SOTA User's Guide - Version 1.0
In SOTA, the static metrics comprise all metrics of a project that are obtained by static analysis of the source code. They are determined while parsing the source code; unlike the coverage metrics, they do not require the program to be executed. On the one hand, these metrics provide a means of estimating the complexity of the source code according to different criteria, giving the user an indicator for improving its structure. On the other hand, they enable the user to assess the cost of testing and the number of tests required for the individual criteria.
The static metrics are visible in the view Metrics for all structures of the project right after loading it. The values of the cyclomatic and essential complexity for classes, files and the project are the maximum of the values of their subordinate functions; for all other metrics the values are summed up.
Notes on the ModBI and BI values: The full scope of exception handling makes it impossible to identify paths precisely. Therefore the computed value is always a lower bound, i.e. the minimal number of paths and subpaths, respectively, that will be reached during the ModBI and BI test.
Cyclomatic complexity is computed from the control flow graph, which represents all paths that might be traversed during program execution and their branching behaviour (cf. view CFG). The cyclomatic complexity z(G) is defined as z(G) = e - n + 2, where e is the number of edges and n is the number of nodes of the control flow graph G. Therefore a function without branches in the program flow always has a cyclomatic complexity of 1, and each branch, e.g. an if-statement, increases the cyclomatic complexity by 1.
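As a hypothetical illustration (the following method is not taken from SOTA or its examples): a method without branches has z(G) = 1, and the for-loop and the if-statement below each add 1, giving z(G) = 3.

    public static int countPositives(int[] values) {
        int count = 0;
        for (int i = 0; i < values.length; i++) {   // branching node: loop condition
            if (values[i] > 0) {                     // branching node: if-statement
                count++;
            }
        }
        return count;
    }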
The definition of the essential complexity is closely related to the cyclomatic complexity. After recursively deleting all primitive control structures from a given control flow graph G, the cyclomatic complexity of the resulting graph G' is defined as the essential complexity e(G) of the graph G: e(G) = z(G'). All simple structures which contain no jumps, with the exception of break instructions in switch statements, are considered primitive structures. Jumps out of control structures make these structures and all structures enclosing them irreducible, thus increasing the value of the essential complexity.
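A hypothetical illustration (not taken from the guide): the break below jumps out of the for-loop (it is not a break inside a switch), so the if-statement and the enclosing loop cannot be reduced and the essential complexity of the method is greater than 1. Rewritten without the break, the method would reduce completely and have an essential complexity of 1.

    public static int indexOf(int[] values, int key) {
        int index = -1;
        for (int i = 0; i < values.length; i++) {
            if (values[i] == key) {
                index = i;
                break;   // jump out of the for-loop (not a break inside a switch)
            }
        }
        return index;
    }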
The number of lines of code is listed here as one of the most primitive metrics of the source code, covering the respective structure. In contrast to all other metrics SOTA computes, LOC strongly depends on the layout of the source code and on the comments. It should therefore be interpreted with care.
Unlike the lines-of-code metric, the number of statements offers an objective, format-independent metric for the extent of the project. To compute this metric, all executable statements are summed up for each structure. The statement coverage test consists of comparing the number of executed statements with the number of all statements.
The number of branches is defined in SOTA with a view to computing the branch coverage. While the number of branches in a function is usually equated with the cyclomatic complexity minus 1, here the number of branches is defined as the sum of the outgoing edges of all branching nodes. Thus a function without branches has zero branches, and each added if-statement increases the number by two.
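A hypothetical illustration of this way of counting (not taken from the guide):

    // No branching nodes: #Branches = 0.
    public static int identity(int x) {
        return x;
    }

    // One if-statement, i.e. one branching node with two outgoing edges
    // (condition true / condition false): #Branches = 2.
    public static int absolute(int x) {
        if (x < 0) {
            return -x;
        }
        return x;
    }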
The number of modified boundary-interior paths corresponds to the number of subpaths through the control flow graph which have to be tested to fully satisfy the modified boundary-interior path coverage test. The different kinds of subpaths are defined according to Liggesmeyer (Software-Qualität, 2002) as follows:
In the view CFG the user can find, for each loop, the number of subpaths which have to be tested according to the above definition in the node info (double-click on the appropriate node). There the value is listed under 'ModBI'. The node info of the function node lists the value for the entire function as well as the values for the subpaths of loops and the subpaths through the entire function.
Analogous to the metric above, the number of boundary-interior paths is specified here for each function, and for classes, files and the project as the sum of all values contained in them. The corresponding paths are defined as all executable paths through the function, subject to a limit on the number of paths: when loops occur, only those paths need to be tested where each loop is either skipped, executed exactly once, or executed at least twice, with the paths through the loop body only being distinguished up to the second execution.
To compute the number of statements with logical conditions, all occurrences of statements with evaluable logical conditions in the source code are summed up. Infinite loops ('while(true)') and loops iterating over a set ('for(Item item : set)') are not counted.
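A hypothetical illustration (the code is not taken from the guide); the method below contains three statements with evaluable logical conditions:

    public static void example(java.util.List<String> items, int n) {
        if (n > 0) {                    // counted: evaluable condition
            System.out.println("positive");
        }
        while (n < 10) {                // counted: evaluable condition
            n++;
        }
        for (String item : items) {     // not counted: loop over a set
            System.out.println(item);
        }
        while (true) {                  // not counted: infinite loop
            if (n > 100) {              // counted: evaluable condition
                break;
            }
            n++;
        }
    }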
This metric corresponds to the sum of the evaluable atomic conditions of all logical conditions. The logical atoms true and false are not counted, since they are not evaluable with regard to the condition coverage test and have no influence on the control flow.
The number of logical conditions contains the sum of all atomic and compound conditions. This value is important for computing the minimal multiple condition coverage.
The actual aim of SOTA is to evaluate program tests by computing coverage metrics. The instrumentation inserted into the code creates a log file with the data SOTA needs to reconstruct the program flow and the evaluation of the conditions in retrospect. From these data the most common coverage metrics are determined for the individual tests; they are then listed in the view Coverage.
The test for Function-Entry-Exit-Coverage requires all entries and exits of each function to be covered for full coverage. It is computed as follows:
In Java there exists only one entry for each function. Counted as possible exits are the normal end of the function, provided it is reachable, as well as all return-statements and all throw-statements outside of try-structures.
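A hypothetical illustration (not taken from the guide):

    // One entry; the two return-statements and the throw-statement (outside of
    // any try-structure) count as exits. The normal end of the method cannot be
    // reached here, so it is not counted as an exit.
    public static int parsePositive(String text) {
        if (text == null) {
            throw new IllegalArgumentException("text must not be null");
        }
        int value = Integer.parseInt(text);
        if (value > 0) {
            return value;
        }
        return 0;
    }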
For full statement coverage it is necessary that every statement in the source code has been executed. Since individual statements are only written to the log file when the source code has been instrumented according to instrumentation level 3, the statement coverage is usually determined after the program test from the logged key data of the control flow.
Note: In the view CFG not every node corresponds to a statement and not every statement corresponds to a node. Therefore the C0-coverage cannot be computed from the covered nodes of the control flow graph; it is instead based on the value #Statements from the view Metrics.
Full branch coverage is reached if all branches of the control flow graph are covered. The percentage coverage is computed differently in practice; to simplify matters, SOTA computes it on the basis of the branches (cf. 7.1.5) as follows:
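Based on the description above, this presumably amounts to the ratio of the covered branches to the value #Branches (an assumption, stated here only for orientation):

    branch coverage = (number of covered branches) / #Branches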
The simple condition coverage exclusively tests whether all logical atoms of the conditions have been evaluated both true and false. However, this does not imply that branch coverage is reached as a minimum goal; therefore it is hardly possible to draw any conclusions from the simple condition coverage. For computing the percentage coverage SOTA counts all evaluations of each atom and compares them with the target value.
The minimal multiple condition coverage has established itself as a practicable condition coverage which also includes the branch coverage. Analogous to C2, all evaluations of the logical atoms are regarded here, as well as all compound, complex conditions. These have to be evaluated both true and false during the tests. The number of logical structures which have to be analysed corresponds to the number of logical conditions listed under 6.1.10.
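A hypothetical illustration (not taken from the guide):

    // For the statement
    //     if ((a && b) || c) { ... }
    // the logical conditions are the atoms a, b and c, the compound condition
    // (a && b) and the complete condition ((a && b) || c). For the minimal
    // multiple condition coverage each of these five conditions has to be
    // evaluated both true and false during the tests.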
An even more exacting test criterion is the modified condition/decision coverage test. To fulfill this coverage it is not only necessary that all logical atoms of every condition adopt the values true and false. Additionally, for each atom there must exist configurations of the condition which differ only in this atom and lead to a different evaluation of the complete condition. This ensures that the test checked whether changing the logical value of each atom has an influence on the total condition. The two truth vectors of a condition fulfilling these requirements for an atom are called an MCDC-couple. The coverage metric is then calculated using the MCDC-couples as follows:
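A hypothetical illustration of MCDC-couples (not taken from the guide):

    // For the condition (a && b):
    //   (a = true,  b = true )  ->  complete condition true
    //   (a = false, b = true )  ->  complete condition false
    // These two truth vectors differ only in the atom a and lead to different
    // evaluations of the complete condition, so they form an MCDC-couple for a.
    //   (a = true,  b = true )  ->  complete condition true
    //   (a = true,  b = false)  ->  complete condition false
    // form an MCDC-couple for the atom b. Three distinct truth vectors are
    // therefore sufficient to fulfill the criterion for this condition.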
The multiple condition coverage test is the most comprehensive condition test, since all truth vectors of every condition need to be tested. This means that the cost of the test grows exponentially with the number of atoms. Additionally, in most cases it is not possible to produce all combinations of truth values, and these impossible combinations usually cannot be recognized easily. The cost of testing 2^(#atoms) truth vectors is merely reduced by the use of short-circuit operators, which stop evaluating the condition as soon as the result of the complete condition has been determined irrevocably.
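A hypothetical illustration of the effect of the short-circuit operators (not taken from the guide):

    // For the condition (a && b) there are 2^2 = 4 truth vectors. With the
    // short-circuit operator && the atom b is not evaluated when a is false,
    // so (a = false, b = true) and (a = false, b = false) cannot be told
    // apart: only three distinguishable truth vectors remain to be tested.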
The modified boundary-interior path coverage test is a test proposed by Liggesmeyer which reduces the number of test cases compared to the boundary-interior path test (see definition in 7.2.9). In order to compute the coverage metric, SOTA determines for each function which of its ModBI-paths were covered by the paths traversed during the program test. The sum of these covered subpaths is then compared with the number of possible ModBI-paths as defined in 7.2.9.
Since the number of ModBI-paths is only a lower bound on the possible subpaths according to this criterion, in practice more ModBI-paths may be traversed (e.g. due to exceptions) than defined by this minimal bound. In this case the coverage value is naturally limited to 1.
The boundary-interior path coverage is computed like the modified boundary-interior path coverage. However, the BI-paths are computed only for the paths going through the entire function, and this value is then compared with the number of possible BI-paths.
In order to allow the user to limit the memory requirements of the log files sensibly and flexibly, the source code can be instrumented at different levels. A configuration of the instrumentation is combined in an instrumentation scheme, IScheme for short, and saved for the corresponding project. For every project SOTA provides three basic ISchemes, which correspond to instrumenting the code according to the respective levels.
Assigning level 0 as an instrumentation level for a structure causes this structure to be excluded from the instrumentation. This is sensible for functions which create a lot of log information (due to frequent execution or complex function flows), but have been tested adequately and can be excluded from further testing.
The basic instrumentation is offered by level 1. Here all function entries, function exits and all branching structures are instrumented, so that the control flow through the functions can be reconstructed from these data. With these data it is possible to compute all coverage metrics except for the condition coverage metrics.
In addition to level 1, the instrumentation according to level 2 also records in the log file the evaluation of each atom, provided it is actually evaluated in the program run. These data allow SOTA to compute the metrics as in level 1 as well as the condition coverage metrics for the program test.
Finally, SOTA offers a full instrumentation of the source code with the instrumentation according to level 3. In addition to the evaluated atoms, the log file will also include entries about the execution of all individual statements. The log file is therefore considerably larger than with the other instrumentation levels. This instrumentation option is not only offered for the sake of completeness; it also permits a detailed analysis of the control flow for programs terminating in an unusual way and for exception handling.
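As a purely conceptual sketch of what such instrumentation could look like (this is not the code SOTA actually generates; the Log class and all of its methods are invented for this illustration only):

    // Invented stand-in for the logging component, for illustration only.
    final class Log {
        static void enter(String fn)                 { System.out.println("enter " + fn); }
        static void exit(String fn, int exitId)      { System.out.println("exit " + fn + " #" + exitId); }
        static void branch(String fn, boolean taken) { System.out.println("branch " + fn + " " + taken); }
        static boolean atom(String text, boolean v)  { System.out.println("atom " + text + " = " + v); return v; }
        static void stmt(String fn, int stmtId)      { System.out.println("stmt " + fn + " #" + stmtId); }
    }

    class Instrumented {
        static int absolute(int x) {
            Log.enter("absolute");                   // level 1: function entry
            if (Log.atom("x < 0", x < 0)) {          // level 2: evaluation of the atom
                Log.branch("absolute", true);        // level 1: branch taken
                Log.stmt("absolute", 1);             // level 3: individual statement
                Log.exit("absolute", 1);             // level 1: function exit (return)
                return -x;
            }
            Log.branch("absolute", false);
            Log.stmt("absolute", 2);
            Log.exit("absolute", 2);
            return x;
        }
    }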
Ant/Ant Buildfile
Apache Ant is a common tool in Java development, comparable to make, for automatically compiling source projects. Targets and commands for the compilation are stored in an XML file, the Ant buildfile, which Ant reads in order to execute the compilation. When using Eclipse, an Ant buildfile can easily be exported via File -> Export -> Ant Buildfile.
ASCLogger.ini / ASCLogger.jar
Testing Java programs requires a logging component named ASCLogger.jar, which has to be included in the project; it then administers the saving of the coverage data. The inclusion in Eclipse is done via Project -> Properties -> Java Build Path -> Add JARs or Add External JARs, depending on whether the user has copied the ASCLogger library into the project or loads it from the SOTA directory. Information about the individual test cases, i.e. project name, test name, description and used IScheme, is provided via the initialization file ASCLogger.ini, which is created on starting the test, written into the execution directory of the test program and then read by ASCLogger.
Execution Directory of the Test Program
The execution directory of the test program is the directory from which the program is started, i.e. the directory where java -cp .. classname is executed or, when a start script is used, the directory containing this batch file. In RCP development with Eclipse the RCP program is started from the base directory of the platform, i.e. Eclipse; in this case the execution directory is "..\eclipse\". The ASCLogger.ini, which contains the information about the test for the logging component, is put into the execution directory. The log files are also written into this directory.
Base Directory of the Test Program
The base directory of the test program is its root directory, where all source files and binaries (possibly in subdirectories) are located. From here the sources and the project are imported, and the coverage report is written into this directory.
Base Directory of SOTA |
The base directory of SOTA is "..\SOTA\". The executable SOTA.exe and the library ASCLogger.jar are stored here. Additionally, the project file <projectname>.project as well as the log file of SOTA with all program outputs are created in this directory.
Dynamic Program Test |
Every test of a program that requires the program to be executed is a dynamic program test. Among those are functional (black-box) and structure-oriented (white-box or glass-box) tests. As a tool for structure-oriented testing, SOTA calculates the nine different coverage metrics for each test.
Instrumentation Scheme / IScheme
SOTA offers several levels of instrumentation in order to limit the overhead caused by the instrumentation. An instrumentation scheme (short: IScheme) contains the information about a specific way of instrumenting the project, i.e. it provides a mapping of all functions of the project to an instrumentation level. SOTA always includes the three basic ISchemes, which allow instrumentation according to levels 1, 2 and 3. If a new IScheme is created, it is saved in the project file <projectname>.project in the base directory of SOTA and will be available for use in this project.
Start Script / Batch File
The start script (a batch file under Windows) is a file which starts the test program when executed. It therefore merely has to contain a typical Java call "java -cp .. classname" in the way specified for the project. If the start script is included in SOTA, the test program can be executed in manual program testing with SOTA.
Static Program Analysis |
In contrast to the dynamic program test, the static program analysis is done without executing the program. The information needed for the static analysis is determined solely by parsing the program. In this way SOTA determines ten different static metrics which provide information about the structure and the complexity of the program and its components.