How to Use LTP
Summary
LTP is the Linux Test Project. This document covers running LTP on an embedded board running Linux and interpreting the results.
Prerequisites
- This doc covers only how to run LTP, not how to build it. A built copy of LTP installed on the board is a prerequisite. To satisfy this requirement, include the LTP software in your Factory build (TSWO_SOFTWARE_ltp).
- LTP assumes that certain users and groups exist on the system. To ensure this, run ./IDcheck.sh from the LTP installation directory. It will prompt you to create any missing users and groups (see the example after this list).
- LTP will use commands from coreutils, cpio, findutils, gawk, grep, shadow, sed, tar, time, util-linux, which, mktemp, module-init-tools, procps OR the equivalent busybox applets.
- When busybox applets are used, some tests may fail because the applets implement fewer features than the "real" commands.
- LTP requires libcap to build correctly.
- Some LTP tests use bash scripts and require bash to be installed.
- The community LTP does not build correctly under uClibc, especially when uClibc has been pared down via config options. glibc is required.
- In order to generate HTML reports, perl is required on the target.
- Network tests require utilities from net-tools, iproute, iputils OR the equivalent busybox applets.
- Some LTP tests are intended to test particular Linux commands. For these tests to pass, the command under test must (obviously) be installed. Examples include: gzip, zip/unzip, tar, eject, and crontab.
- Some of the async I/O tests require libaio. These tests will not pass unless LTP is built with libaio (in Factory, just select TSWO_SOFTWARE_libaio). Note that libaio will not build on all platforms.
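For example, a typical pre-flight check before a first run might look like the following (paths assume LTP is installed under /usr/ltp-20081231, as in the examples later in this document):
cd /usr/ltp-20081231
./IDcheck.sh
Answer the prompts to create any users or groups that IDcheck.sh reports as missing.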
If you are using Factory on your desktop, then you can run "make checktool-ltp" to check your platform for compatibility with LTP.
Note: At the beginning of each test run, LTP attempts to gather and print information about the software installed on your system. Depending on what you have installed, some of this output may contain "garbage" values. In particular, replacing coreutils with busybox will cause many of these printouts to fail.
(Expected) Test Failures
Before we get into running LTP, we should set expectations. Generally, some tests in the LTP will fail on any test run. This does not necessarily indicate a faulty version of Linux. Each test failure must be evaluated to see if it is caused by a faulty test or by hardware or an environment that does not meet the test's assumptions. Information about known problems, both general and board-specific, is included at the end of this document.
Running LTP
LTP comes with a script to run the LTP test suites. In the simplest case, simply running this script will run the default set of tests. Much of the interesting output appears on the console, so it should generally be captured to a file. In our example, LTP is installed under /usr:
cd /usr/ltp-20081231
./runltp 2>&1 | tee ltp.out
By default, the script creates two output files. A log file will appear under the results directory; it lists each test run and its result. In addition, a failed test command file will be written under the output directory. The default name for both files is based on the date and time of the test run. More on failed test command files later.
The runltp script accepts many options, run ./runltp -h to see them all. We will cover many of the more useful options in this manual.
usage: ./runltp [ -a EMAIL_TO ] [ -c NUM_PROCS ] [ -C FAILCMDFILE ] [ -d TMPDIR ]
       [ -D NUM_PROCS,NUM_FILES,NUM_BYTES,CLEAN_FLAG ] -e [ -f CMDFILES(,...) ]
       [ -g HTMLFILE ] [ -i NUM_PROCS ] [ -l LOGFILE ]
       [ -m NUM_PROCS,CHUNKS,BYTES,HANGUP_FLAG ] -N -n [ -o OUTPUTFILE ] -p -q
       [ -r LTPROOT ] [ -s PATTERN ] [ -t DURATION ] -v [ -w CMDFILEADDR ]
       [ -x INSTANCES ] [ -b DEVICE ] [ -B DEVICE_FS_TYPE ]

 -a EMAIL_TO      EMAIL all your Reports to this E-mail Address
 -c NUM_PROCS     Run LTP under additional background CPU load
                  [NUM_PROCS = no. of processes creating the CPU Load by spinning over sqrt()
                  (Defaults to 1 when value 0 or undefined)]
 -C FAILCMDFILE   Command file with all failed test cases.
 -d TMPDIR        Directory where temporary files will be created.
 -D NUM_PROCS,NUM_FILES,NUM_BYTES,CLEAN_FLAG
                  Run LTP under additional background Load on Secondary Storage (Separate by comma)
                  [NUM_PROCS = no. of processes creating Storage Load by spinning over write()]
                  [NUM_FILES = Write() to these many files (Defaults to 1 when value 0 or undefined)]
                  [NUM_BYTES = write these many bytes (defaults to 1GB, when value 0 or undefined)]
                  [CLEAN_FLAG = unlink file to which random data written, when value 1]
 -e               Prints the date of the current LTP release
 -f CMDFILES      Execute user defined list of testcases (separate with ',')
 -g HTMLFILE      Create an additional HTML output format
 -h               Help. Prints all available options.
 -i NUM_PROCS     Run LTP under additional background Load on IO Bus
                  [NUM_PROCS = no. of processes creating IO Bus Load by spinning over sync()]
 -l LOGFILE       Log results of test in a logfile.
 -m NUM_PROCS,CHUNKS,BYTES,HANGUP_FLAG
                  Run LTP under additional background Load on Main memory (Separate by comma)
                  [NUM_PROCS = no. of processes creating main Memory Load by spinning over malloc()]
                  [CHUNKS = malloc these many chunks (default is 1 when value 0 or undefined)]
                  [BYTES = malloc CHUNKS of BYTES bytes (default is 256MB when value 0 or undefined)]
                  [HANGUP_FLAG = hang in a sleep loop after memory allocated, when value 1]
 -N               Run all the networking tests.
 -n               Run LTP with network traffic in background.
 -o OUTPUTFILE    Redirect test output to a file.
 -p               Human readable format logfiles.
 -q               Print less verbose output to screen.
 -r LTPROOT       Fully qualified path where testsuite is installed.
 -s PATTERN       Only run test cases which match PATTERN.
 -t DURATION      Execute the testsuite for given duration. Examples:
                  -t 60s = 60 seconds
                  -t 45m = 45 minutes
                  -t 24h = 24 hours
                  -t 2d  = 2 days
 -T REPETITION    Execute the testsuite for REPETITION no. of times
 -v               Print more verbose output to screen.
 -w CMDFILEADDR   Uses wget to get the user's list of testcases.
 -x INSTANCES     Run multiple instances of this testsuite.
 -b DEVICE        Some tests require an unmounted block device to run correctly.
 -B DEVICE_FS_TYPE
                  The filesystem of test block devices.
Controlling tests to be run
There are two basic flags for controlling which tests are run. With the -f flag, you can select an exact list of tests to run. This flag's argument is a comma-separated list of files, each of which contains a list of tests to be run. LTP comes with a series of command files in the runtest directory. To use these files, just use the base name of the file with no path. For example, this will run just the scheduling tests:
./runltp -f sched
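Multiple command files can be given at once. For example, this should run both the filesystem and memory-management suites (fs and mm are command files shipped in the stock runtest directory):
./runltp -f fs,mm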
The other way to control what tests are run is to use the -s flag. This flag accepts a regex as an argument and only tests which match that regex are run. For example, this will run only the tests that start with mkdir:
./runltp -s ^mkdir
The -s and -f flags can be used in conjunction. For example, this will run only the mkdir test from the commands suite (excluding the syscall mkdir tests):
./runltp -f commands -s ^mkdir
By using the -f and -s flags you can exercise careful control over the tests to be run. For greater control, you may also write your own command files rather than using the provided ones. Using the provided files as a guide, write your own files and place them in the runtest directory; a minimal sketch follows.
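Each line of a command file is a test tag followed by the command to run; the tag is what appears in the results log. A hypothetical runtest/mytests (the file name is just an example) holding two of the stock tests might look like:
# format: <tag> <command and arguments>
mkdir01 mkdir01
symlink01 symlink01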
Failed Test Command Files
A common workflow is to run the LTP test suite, then work on fixing or investigating failures. During this time, running only the tests that have failed is desirable. Of course, the -s and -f options can be used to select the tests that are run. Another option is failed test command files. For each run, the test runner will output a file listing all of the failed tests. The -C flag controls the name of this file, which is written to the output directory. The file is in the proper format to be a command file, so we can pass it to the -f flag to run only the failed tests. Since this file is not in the runtest directory, we must pass an absolute path. This example runs all of the LTP default tests, then re-runs only the tests that failed:
./runltp -C failed.tests
./runltp -f `pwd`/output/failed.tests
LTP Output
The most important output of LTP, of course, is the results of each test. LTP will output a PASS/FAIL result for each test run. In addition, detailed output from each test run is also available (i.e. stdout/err from the test).
Human-readable results
The default output when no flags are passed in is human-readable output. This output will list each test name, result, and exit value in a human-readable tabular format. In addition, the start time of the test is displayed at the top, and a summary is displayed at the end.
The name of the output file is based on the date/time of the test run by default. The -l option controls this name. However, when -l is used, the default output type is script-readable. To get human-readable results, also pass -p:
./runltp -l myresults.log -p
Script-readable results
In many cases it is desirable to automate some or all of the test runs or the interpretation of the results. To facilitate this, LTP supports a script-readable results format. To activate it, use the -l option and set an output log file name:
./runltp -l myresults.log
The output still includes the startup time for the suite as a string. After that, there is one entry per test, which includes the start time (in seconds since the epoch), duration, exit code, and other data for each test. The summary is no longer included, although it is easily calculated from the per-test data. As a quick-and-dirty example, this sed script will convert the test-by-test results into a CSV file:
sed 's/[^ ]*=\([^ ]*\) \?/\1,/g' results/myresults.log
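For reference, a log entry has roughly the form below (the field values here are illustrative, not from a real run); the sed script would turn it into the comma-separated line that follows:
tag=abort01 stime=1245762662 dur=2 exit=exited stat=0 core=no cu=0 cs=1
abort01,1245762662,2,exited,0,no,0,1,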
HTML results
LTP can also generate an HTML report of a test run. This file is generated in addition to either the human-readable or script-readable output. To activate this, use the -g option and pass a name for the HTML file:
./runltp -g myresults.html
The HTML file will be written into the output directory. Generating the HTML output requires perl on the target (the HTML generator is written in perl).
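The -g option can be combined with the logging options above; for example, this should produce both a script-readable log and an HTML report in a single run:
./runltp -l myresults.log -g myresults.html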
Test messages (stdout/stderr)
In addition to the results, each test generally prints some information to stdout/stderr. This information is critical in diagnosing tests, especially in the case of a failure. The output for each test is bounded by <<<test_start>>> and <<<test_end>>> markers. In addition, the test runner itself prints some status messages at the beginning of the test run, and some at the end. The simplest way to capture all of this output is to redirect it to a file. For example, this will capture stdout and stderr from the run and save it to a file as well as displaying it on the screen:
./runltp 2>&1 | tee ltp.out
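An excerpt of the captured output looks roughly like this (the exact header fields vary by LTP version, and the mkdir01 lines are illustrative):
<<<test_start>>>
tag=mkdir01 stime=1245762662
cmdline="mkdir01"
<<<test_output>>>
mkdir01     1  TPASS  :  mkdir() returned 0
<<<test_end>>>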
Comparing Test Results
Since many tests in LTP can be expected to fail in many situations, determining the relative quality of a Linux platform with LTP can be challenging. One useful technique is comparative analysis. By running LTP regularly, you can establish whether changes to the platform are having a positive or negative impact, and also identify specific regressions. By default, LTP does not produce output in an easily comparable format. However, Timesys has added a script to LTP that compares two script-readable results files. Run it as follows:
./diffresults.sh results/file1.log results/file2.log
The output is in the same format as the diff command: it prints the lines that differ (with timing-related information removed).
< tag=execve04 exit=exited stat=4 core=no
---
> tag=execve04 exit=exited stat=0 core=no
In the above example, the execve04 test failed with exit status 4 in the first run, but passed (status 0) in the second.
Known Issues with LTP
Increasing size of /tmp
Many of the tests write temporary data to the filesystem, and some of them write a large amount. For example, some tests exercise "large file" support and so must write very large files. The default location for these files is /tmp. Unfortunately, on many setups /tmp is mounted as tmpfs, which means the contents are stored in memory. Needless to say, this makes it a bad place for LTP temporary data. The solution is to simply mount some larger storage area there (an NFS mount or, even better, a hard drive) before running LTP. In many cases you can just unmount /tmp, as the root partition will already be on suitable storage. Alternatively, you can instruct LTP to use a different directory for temp data with the -d option. Note: Moving temp data to an NFS mount will cause other problems; see below.
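For example (the device and directory names here are illustrative; substitute your own):
# option 1: fall back to the root filesystem for /tmp
umount /tmp
# option 2: mount a hard-drive partition over /tmp
mount /dev/sda1 /tmp
# option 3: point LTP at a different temp directory
./runltp -d /mnt/disk/ltp-tmp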
Problems with NFS mounts
The NFS filesystem has slightly different semantics than other "normal" Linux filesystems. This can cause problems with some tests, as they are either testing or indirectly relying on the "normal" semantics. The best fix for both this and the /tmp size problem is to use a regular block device (e.g. a hard drive) with a normal Linux filesystem (e.g. ext3) as your temp area. In situations where this is not possible, you can re-run only the failed tests (see above for instructions) with the temp storage area on tmpfs, as shown below. Few tests rely on both a large storage capacity and a non-NFS filesystem.
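For example, assuming an initial run recorded its failures in output/failed.tests, this re-runs just those tests with temp data on the memory-backed /tmp:
./runltp -f `pwd`/output/failed.tests -d /tmp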
Dealing with "hackbench"
One of the scheduler tests in LTP is called "hackbench". This test spawns many processes or threads to run simultaneously; by default, it uses 2000 processes and 800 threads. Unfortunately, there is often not enough memory on an embedded system for this. The out-of-memory handler will be triggered, which can cause the test to fail or even hang. Two options are available to solve this. The first is to increase the available memory by adding swap space (i.e. virtual memory). Since this is likely to be temporary, you can use a regular file as swap. However, this file must be local; swap will not work over NFS. Here is the basic procedure for creating and activating a swap file (this one is 1GB):
dd if=/dev/zero of=/root/swapfile bs=1M count=1024
mkswap /root/swapfile
swapon /root/swapfile
You may also use a block device directly instead of a file. For example, to use the 2nd partition on your first drive:
mkswap /dev/sda2
swapon /dev/sda2
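Either way, you can verify that the swap space is active before starting LTP (swapon -s is provided by util-linux; busybox applet support may vary):
swapon -s
free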
Using a swap file is a great way to increase memory if a suitable filesystem or block device is available. Unfortunately, embedded boards are often run with an NFS-based filesystem and may not have an appropriate block device. In this case, the only option is to reduce the size of the test itself. A short test can determine how many threads and processes your system can handle. The following runs the hackbench test with the default number of threads and processes, but minimal iterations:
./testcases/bin/hackbench 50 process 10
./testcases/bin/hackbench 20 thread 10
Each should generally take a few minutes or less if it works. If either triggers the out-of-memory handler, run them again with the first number lowered (the number of threads/processes is 40x that number). You'll need to look at the console to see the out-of-memory messages (or run dmesg). Once you have a number of threads/processes that works for you, edit ./runtest/sched and change the hackbench lines accordingly, as in the sketch below.
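For example, if 25 process groups and 10 thread groups worked in your short test, the hackbench entries in ./runtest/sched would be edited to something like the following (the iteration count shown is an assumption; keep whatever your copy of the file already uses):
hackbench01 hackbench 25 process 1000
hackbench02 hackbench 10 thread 1000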
Board-specific Issues
OMAP 35xx
The kernel has a bug in handling unaligned memory accesses from user programs. This affects the "epoll" test from the kernel syscall test suite; the test will hang indefinitely. Comment out this test in runtest/syscalls before running LTP, as sketched below.
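For example, prefix the epoll entry with a # (the exact tag name varies across LTP releases; look for the line that runs the epoll-ltp binary):
#epoll01 epoll-ltp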