Monday, December 31, 2007

On Error Statement

Enables or disables error-handling.
On Error Resume Next
On Error GoTo 0
Remarks
If you don't use an On Error Resume Next statement anywhere in your code, any run-time error that occurs can cause an error message to be displayed and code execution to stop. However, the host running the code determines the exact behavior. The host can sometimes opt to handle such errors differently. In some cases, the script debugger may be invoked at the point of the error. In still other cases, there may be no apparent indication that any error occurred, because the host chooses not to notify the user. Again, this is purely a function of how the host handles any errors that occur.
Within any particular procedure, an error is not necessarily fatal as long as error-handling is enabled somewhere along the call stack. If local error-handling is not enabled in a procedure and an error occurs, control is passed back through the call stack until a procedure with error-handling enabled is found and the error is handled at that point. If no procedure in the call stack is found to have error-handling enabled, an error message is displayed at that point and execution stops or the host handles the error as appropriate.
On Error Resume Next causes execution to continue with the statement immediately following the statement that caused the run-time error, or with the statement immediately following the most recent call out of the procedure containing the On Error Resume Next statement. This allows execution to continue despite a run-time error. You can then build the error-handling routine inline within the procedure.
An On Error Resume Next statement becomes inactive when another procedure is called, so you should execute an On Error Resume Next statement in each called routine if you want inline error handling within that routine. When a procedure is exited, the error-handling capability reverts to whatever error-handling was in place before entering the exited procedure.
Use On Error GoTo 0 to disable error handling if you have previously enabled it using On Error Resume Next.
The following example illustrates use of the On Error Resume Next statement.
On Error Resume Next
Err.Raise 6 ' Raise an overflow error.
MsgBox "Error # " & CStr(Err.Number) & " " & Err.Description
Err.Clear ' Clear the error.
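Because an On Error Resume Next statement becomes inactive once the procedure that executed it exits, the following additional sketch (the procedure name and values are invented for illustration and are not part of the original documentation) shows inline handling enabled inside a called routine:
Sub DivideSafely(a, b)
    On Error Resume Next           ' Enable inline handling in this routine only.
    Dim result
    result = a / b                 ' Raises error 11 (Division by zero) when b = 0.
    If Err.Number <> 0 Then
        MsgBox "Error # " & CStr(Err.Number) & " " & Err.Description
        Err.Clear
    End If
End Sub                            ' Previous error handling is restored on exit.

DivideSafely 10, 0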
Requirements
Version 1
See Also
Err Object | Exit Statement
Err Object
Contains information about run-time errors. Accepts the Raise and Clear methods for generating and clearing run-time errors.
Remarks
The Err object is an intrinsic object with global scope — there is no need to create an instance of it in your code. The properties of the Err object are set by the generator of an error — Visual Basic, an Automation object, or the VBScript programmer.
The default property of the Err object is Number. Err.Number contains an integer and can be used by an Automation object to return an SCODE.
When a run-time error occurs, the properties of the Err object are filled with information that uniquely identifies the error and information that can be used to handle it. To generate a run-time error in your code, use the Raise method.
The Err object's properties are reset to zero or zero-length strings ("") after an On Error Resume Next statement. The Clear method can be used to explicitly reset Err.
The following example illustrates use of the Err object:
On Error Resume Next
Err.Raise 6 ' Raise an overflow error.
MsgBox ("Error # " & CStr(Err.Number) & " " & Err.Description)
Err.Clear ' Clear the error.
Properties and Methods
Err Object Properties and Methods
Requirements
Version 1
See Also
On Error Statement
Err Object Properties and Methods
The Err object contains information about run-time errors.
Properties
Description Property
HelpContext Property
HelpFile Property
Number Property
Source Property
Methods
Clear Method
Raise Method
Description Property
Returns or sets a descriptive string associated with an error.
object.Description [= stringexpression]
Arguments
object
Always the Err object.
stringexpression
A string expression containing a description of the error.
Remarks
The Description property consists of a short description of the error. Use this property to alert the user to an error that you can't or don't want to handle. When generating a user-defined error, assign a short description of your error to this property. If Description isn't filled in, and the value of Number corresponds to a VBScript run-time error, the descriptive string associated with the error is returned.
On Error Resume Next
Err.Raise 6 ' Raise an overflow error.
MsgBox ("Error # " & CStr(Err.Number) & " " & Err.Description)
Err.Clear ' Clear the error.
Requirements
Version 1
See Also
Err Object | HelpContext Property | HelpFile Property | Number Property | Source Property
Applies To: Err Object
HelpContext Property
Sets or returns a context ID for a topic in a Help File.
object.HelpContext [= contextID]
Arguments
object
Required. Always the Err object.
contextID
Optional. A valid identifier for a Help topic within the Help file.
Remarks
If a Help file is specified in HelpFile, the HelpContext property is used to automatically display the Help topic identified. If both HelpFile and HelpContext are empty, the value of the Number property is checked. If it corresponds to a VBScript run-time error value, then the VBScript Help context ID for the error is used. If the Number property doesn't correspond to a VBScript error, the contents screen for the VBScript Help file is displayed.
The following example illustrates use of the HelpContext property:
On Error Resume Next
Dim Msg
Err.Clear
Err.Raise 6 ' Generate "Overflow" error.
Err.Helpfile = "yourHelp.hlp"
Err.HelpContext = yourContextID
If Err.Number <> 0 Then
Msg = "Press F1 or Help to see " & Err.Helpfile & " topic for" & _
" the following HelpContext: " & Err.HelpContext
MsgBox Msg, , "error: " & Err.Description, Err.Helpfile, Err.HelpContext
End If
Requirements
Version 2
See Also
Description Property | HelpFile Property | Number Property | Source Property
Applies To: Err Object
Number Property
Returns or sets a numeric value specifying an error. Number is the Err object's default property.
object.Number [= errornumber]
Arguments
object
Always the Err object.
errornumber
An integer representing a VBScript error number or an SCODE error value.
Remarks
When returning a user-defined error from an Automation object, set Err.Number by adding the number you selected as an error code to the constant vbObjectError.
The following code illustrates the use of the Number property.
On Error Resume Next
Err.Raise vbObjectError + 1, "SomeObject" ' Raise Object Error #1.
MsgBox ("Error # " & CStr(Err.Number) & " " & Err.Description)
Err.Clear ' Clear the error.
Requirements
Version 1
See Also
Description Property | HelpContext Property | HelpFile Property | Err Object | Source Property
Applies To: Err Object
Source Property
Returns or sets the name of the object or application that originally generated the error.
object.Source [= stringexpression]
Arguments
object
Always the Err object.
stringexpression
A string expression representing the application that generated the error.
Remarks
The Source property specifies a string expression that is usually the class name or programmatic ID of the object that caused the error. Use Source to provide your users with information when your code is unable to handle an error generated in an accessed object. For example, if you access Microsoft Excel and it generates a Division by zero error, Microsoft Excel sets Err.Number to its error code for that error and sets Source to Excel.Application. Note that if the error is generated in another object called by Microsoft Excel, Excel intercepts the error and sets Err.Number to its own code for Division by zero. However, it leaves the other Err object (including Source) as set by the object that generated the error.
Source always contains the name of the object that originally generated the error — your code can try to handle the error according to the error documentation of the object you accessed. If your error handler fails, you can use the Err object information to describe the error to your user, using Source and the other Err to inform the user which object originally caused the error, its description of the error, and so forth.
When generating an error from code, Source is your application's programmatic ID.
The following code illustrates use of the Source property.
On Error Resume Next
Err.Raise 6 ' Raise an overflow error.
MsgBox ("Error # " & CStr(Err.Number) & " " & Err.Description & Err.Source)
Err.Clear ' Clear the error.
Requirements
Version 1
See Also
Description Property | Err Object | HelpContext Property | HelpFile Property | Number Property | On Error Statement
Applies To: Err Object
Clear Method
Clears all property settings of the Err object.
object.Clear
The object is always the Err object.
Remarks
Use Clear to explicitly clear the Err object after an error has been handled. This is necessary, for example, when you use deferred error handling with On Error Resume Next. VBScript calls the Clear method automatically whenever any of the following statements is executed:
• On Error Resume Next
• Exit Sub
• Exit Function
The following example illustrates use of the Clear method.
On Error Resume Next
Err.Raise 6 ' Raise an overflow error.
MsgBox ("Error # " & CStr(Err.Number) & " " & Err.Description)
Err.Clear ' Clear the error.
Requirements
Version 1
See Also
Description Property | Err Object | Number Property | On Error Statement | Raise Method | Source Property
Applies To: Err Object
Raise Method
Generates a run-time error.
object.Raise(number, source, description, helpfile, helpcontext)
Arguments
object
Always the Err object.
number
A Long integer subtype that identifies the nature of the error. VBScript errors (both VBScript-defined and user-defined errors) are in the range 0–65535.
source
A string expression naming the object or application that originally generated the error. When setting this property for an Automation object, use the form project.class. If nothing is specified, the programmatic ID of the current VBScript project is used.
description
A string expression describing the error. If unspecified, the value in number is examined. If it can be mapped to a VBScript run-time error code, a string provided by VBScript is used as description. If there is no VBScript error corresponding to number, a generic error message is used.
helpfile
The fully qualified path to the Help file in which help on this error can be found. If unspecified, VBScript uses the fully qualified drive, path, and file name of the VBScript Help file.
helpcontext
The context ID identifying a topic within helpfile that provides help for the error. If omitted, the VBScript Help file context ID for the error corresponding to the number property is used, if it exists.
Remarks
All the arguments are optional except number. If you use Raise, however, without specifying some arguments, and the property settings of the Err object contain values that have not been cleared, those values become the values for your error.
When setting the number property to your own error code in an Automation object, you add your error code number to the constant vbObjectError. For example, to generate the error number 1050, assign vbObjectError + 1050 to the number property.
The following example illustrates use of the Raise method.
On Error Resume Next
Err.Raise 6 ' Raise an overflow error.
MsgBox ("Error # " & CStr(Err.Number) & " " & Err.Description)
Err.Clear ' Clear the error.
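As a further hedged sketch (the source, description, Help file name and context ID below are invented for illustration), Raise can also be called with all five arguments to generate a user-defined error:
On Error Resume Next
Err.Raise vbObjectError + 1050, "MyApp.MyClass", _
    "The order could not be processed.", "myHelp.hlp", 1234
MsgBox "Error # " & CStr(Err.Number) & " " & Err.Description & _
    " (Source: " & Err.Source & ")"
Err.Clear ' Clear the error.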
Requirements
Version 1
See Also
Clear Method | Description Property | Err Object | Number Property | Source Property
Applies To: Err Object

Mercury QuickTest Professional Scripting Guide

QUICKTEST PROFESSIONAL SCRIPTING GUIDE
AUTHORISATION
Prepared By:   Jose Barranco, Prabhakar Rao, Selvan Nithiy
Reviewed By:   Mark Haley
Authorised By: Barry Baker

VERSION HISTORY
Version  Date        Prepared By    Reason for Change
V 1.0    04/08/2006  Jose Barranco  Initial Draft
Contents
Abstract
1. Test Script Modularity
1.1 Introduction
1.2 Identifying Reusable Actions (Planning)
1.3 Factors to consider when using Multiple Reusable Actions
1.4 Reusable Actions organization
1.5 Working with functions
1.6 VBS file organization
2. Data Driving Scripts
3. Script Presentation
3.1 Scripting Sections
3.1.1 Initialisation Section
3.1.2 Verification Section
3.1.3 Auxiliary Section
3.1.4 Cleanup Section
3.1.5 Return Code
3.1.6 Grouping Blocks
3.2 Code Layout
3.2.1 Indentation
3.2.2 Header
3.2.3 Comments
3.3 Naming Conventions
4. Making Tests Robust
4.1 Synchronisation
4.1.1 Synchronisation Point
4.1.2 Exist and Wait Statements
4.1.3 Global Timeout
4.2 Errors and Exception Handling for Test Scripts
4.2.1 Handling Errors
4.2.2 Handling Exceptions
5. Results Logging
6. Version Control
Appendix A: Error Codes
Appendix B: Template for Test Script Header
Appendix C: VBScript Useful Objects
Abstract
The process of building automated test scripts is similar to the process of software development, so Software Engineering best practices should be applied to ensure that the code is readable, comprehensible, maintainable and, where possible, reusable. Development of automated scripts should follow the phases of the Software Development Life Cycle – Plan, Analysis and Design, Coding and Testing, Deployment and Support. An appropriate review process should be organised to assure the quality of the deliverables for each phase. Well-designed reusable test cases should be treated as Configuration Items and controlled by a Configuration Management system when this makes sense from a business perspective.
This document provides an overview of Test Scripting Best Practices, based on the experience of implementing Test Automation tools by the Mercury Interactive Professional Services Organization as well as the internal use of these tools by the Mercury Interactive QA Department. Its objective is to provide scripting guidelines and techniques to be followed for effective test automation using Mercury's QuickTest Professional.
The following sections are covered in this document:
Test Script Modularity and Data Driving
Scripting presentation
Synchronisation
Error and Exception Handling
Result Logging
VBScript Useful Objects
1. Test Script Modularity
1.1 Introduction
When you plan a suite of tests you may realize that each test requires one or more identical activities, such as a logging-in Business Process that can be repeated a number of times in different tests and different Business Processes.
Rather than recording the business process a number of times in a number of tests, and enhancing this part of the script (e.g. with Checkpoints and Parameterisation) separately for each test, you can create an action that performs the operation in the application in one test.
Once you are satisfied with the action you recorded and enhanced, you can make the action reusable and insert it into other tests. These external/reusable actions can then be used and reused in every test case that requires similar functionality, enabling testers to use a single action in hundreds of different test cases with little or no extra effort. This eliminates many lines of redundant script code that would otherwise have to be created, stored and maintained for each business process.
1.2 Identifying Reusable Actions (Planning)
The Business Analyst or SME (Subject Matter Expert) should identify possible reusable activities (a number of steps comprising a Business Process or Sub-Business Processes) that can be implemented as a Reusable Action, either during the planning stage or during the implementation of the Test Requirements.
SMEs should plan for cases where discrete business processes can be built once and used many times and, where possible, determine how tests (called actions) should be placed in main calling tests and whether nested tests are required for the Business Process.
Figure 1. Reusable Actions in QuickTest Professional (a reusable "Logging In" action called by Test 1, Test 2 and Test 3)
Once a number of reusable activities have been identified, the team might also create an action library as part of organising the Testing Project, so that other testers can start calling (using) the Reusable Actions.
In the real world, due to various constraints and limitations (e.g. company resources, manpower, etc.), the reusable activities are often identified during the recording process, as the QTP Expert or Tester becomes familiar with the steps to follow in the AUT (Application Under Test) in order to carry out a test.
1.3 Factors to consider when using Multiple Reusable Actions
When using multiple, reusable actions, keep the following in mind:
· Actions can be called or copied from another test
· Run settings have to be set per-action
· Parameters and data from the called test are reflected in the calling test
· An action can be deleted or an action call can be deleted
· You can position your action calls separately or nest them with other actions
· A test has a limitation of 255 actions (The reason is that each Action has a data
table sheet. The Formula 1 control has a limitation of 255 sheets so the
limitation is 255 Actions per test. You may be able to add additional Actions
after reaching the 255 "limit", however those Actions will not be able to access
a local data sheet.)
· A Reusable Action cannot receive or return an array as a parameter; however, you can work around this by passing a string with tokens (a delimiter) separating the different elements, as shown in the sketch after this list.
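A minimal sketch of this workaround (the action name, parameter name and delimiter below are hypothetical, not taken from the guide) uses Join to pass the array as a single string and Split to rebuild it inside the called action:
Dim Countries, CountryList
Countries = Array("UK", "France", "Spain")
CountryList = Join(Countries, ";")              ' "UK;France;Spain"
RunAction "ProcessCountries", oneIteration, CountryList

' Inside the called action, the string parameter is split back into an array:
' Countries = Split(Parameter("CountryList"), ";")
' For Each Country In Countries
'     ' ... use Country ...
' Next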
1.4 Reusable Actions organization
Reusable actions should be grouped into libraries in such a way that each library includes no more than 20 scripts. There is no technical limit on the number of reusable actions per library (QTP test); the figure is simply a matter of convenience for the library's maintenance. Each action should have detailed documentation – this should be entered via the QTP UI so that the information is available to users while browsing the actions. The documentation should include the action's objective, parameters and return code. The author's name is stored automatically.
1.5 Working with functions
In addition to the test objects, methods, and built-in functions supported by the
QuickTest Test Object Model, you can define your own function libraries
containing VBScript functions, subroutines, modules, and so forth, and then
call their functions from your test.
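For illustration, a minimal sketch of such a library function follows (the file name and usage line are assumptions, not part of the guide); once the .vbs file is associated with the test (see section 1.6), the function can be called directly from any action:
' Contents of a hypothetical library file, e.g. CommonUtils.vbs
Function GetFileSize(FilePath)
    Dim FSO
    Set FSO = CreateObject("Scripting.FileSystemObject")
    GetFileSize = FSO.GetFile(FilePath).Size
    Set FSO = Nothing
End Function

' In a test action, after associating the library:
' MsgBox GetFileSize("C:\temp\results.log")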
1.6 VBS file organization
VBScript development standards should be applied to the organization of the VBS library. The library should be loaded into the test as follows:
1. Select Test->Settings menu in QTP - see Test Settings dialog
2. Select Resources tab in the Test Settings dialog
3. Click on the + button and click on the file button
4. Select All files option for the File Types in the File Open dialog
5. Navigate to the required VBS file
6. Click on Set as Default button in Test Settings dialog
Then, the functions can be used in QTP "as is" – no additional declaration is required.
The diagram below illustrates reusable actions in conjunction with reusable function
libraries.
[Diagram: reusable actions used with reusable function libraries – a controller script (QTP test) calls local actions and external actions (External Action1 to External Action4) stored in a test library, and all actions share a common VBScript function library.]
2. Data Driving Scripts
Data-driven testing puts a layer of abstraction between the data and the test script, eliminating literal (hard-coded) values in the QuickTest test script. Because the data is separated from the test script, you can:
· Modify test data without affecting the test script
· Add new test cases by modifying test data, not the test script
· Share the same test data with many other scripts
The data your test uses is stored in the design-time Data Table, which is
displayed in the Data Table pane at the bottom of the screen while you insert
and edit steps.
The Data Table has the characteristics of a Microsoft Excel spreadsheet,
meaning that you can store and use data in its cells and you can also execute
mathematical formulas within the cells. You can use the DataTable, DTSheet
and DTParameter utility objects to manipulate the data in any cell in the Data
Table.
Methods available for DataTable, DTSheet and DTParameter

DataTable:   AddSheet, DeleteSheet, Export, ExportSheet, GetCurrentRow, GetRowCount, GetSheet, GetSheetCount, Import, ImportSheet, SetCurrentRow, SetNextRow, SetPrevRow
DTSheet:     AddParameter, DeleteParameter, GetCurrentRow, GetParameter, GetParameterCount, GetRowCount, SetCurrentRow, SetNextRow, SetPrevRow
DTParameter: Name, RowValue, Value, ValueByRow
When you data drive a script, the script uses an object reference instead of the initial
hard-coded value. The object in the script will be a reference to the data table sheet
and column made through the data table object.
For further information regarding an object or an object method, users can click on the object or method and press F1.
Dialog("Login").WinEdit("AgentName").Set "1234"                           ' hard-coded
Dialog("Login").WinEdit("AgentName").Set DataTable("Agent", dtLocalSheet) ' data driven
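As a further hedged sketch (the sheet and column names are reused from the example above and remain hypothetical), the DataTable and DTSheet methods listed earlier can be combined to iterate over every row of the local sheet:
Dim RowIndex
For RowIndex = 1 To DataTable.GetSheet(dtLocalSheet).GetRowCount
    DataTable.GetSheet(dtLocalSheet).SetCurrentRow RowIndex
    Dialog("Login").WinEdit("AgentName").Set DataTable("Agent", dtLocalSheet)
Next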
When working with Quality Center and Data Tables, you must save the Data
Table file as an attachment in your Quality Center project before you specify
the Data Table file in the Resources tab of the Test Settings dialog box.
You can add a new or existing Data Table file to your Quality Center project.
Note that if you add an existing Data Table from the file system to a Quality
Center project, it will be a copy of the one used by tests not in the Quality
Center project, and thus once you save the file to the project, changes made to
the Quality Center Data Table file will not affect the Data Table file in the file
system and vice versa.
3. Script Presentation
When you write a script, you should always consider how the script might be used in
the future, as these scripts might be run repeatedly or be copied and used by other
testers. These testers might need to modify the scripts to fit their own needs.
At the very least, they will probably want to view the script code so they can better
understand what the script actually does, and how. In these situations, it is a good idea
if your scripts follow accepted organizational standards for such things as
commenting, formatting, and naming; following these standards makes it easier for
others to read, modify, and maintain the scripts.
3.1 Scripting Sections
There are different stages (blocks or sections of code) in the implementation of a script, and each of these sections carries out one specific function.
· Initialisation Section
· Verification Section
· Auxiliary Section
· Cleanup Section
· Return Code
3.1.1 Initialisation Section
Used for declaring variables and defining constants, the initialisation section should
always come first in a script. All dynamic test resources should be loaded in this
block. If a test needs a configuration other than the current one, the old configuration
should be saved in temporary variables, so that it is available for restoring in a
cleanup block. Depending on the objectives of the test, it should be able either to use
an application that is already running, an application that is brought to a required
initial state, or an application that is started from scratch by the test.
Note that dynamic test resources can also be read from a property file, so that the configuration can be changed without modifying the script itself.
If a test needs a configuration other than the current one, the old configuration should
be saved and system variables should be changed to the required values. For example:
Extern.Declare micInteger, "WritePrivateProfileStringA", "kernel32.dll", _
    "WritePrivateProfileStringA", micString, micString, micString, micString
CurrValue = Environment.Value("APPURL")
Environment.LoadFromFile "c:\temp\myconfigdata.ini"
3.1.2 Verification Section
Each check block should consist of a set of actions followed by a verification point. The verification points may be of two kinds – standard checkpoints available in the test automation product, or programmed algorithmic checks (custom or user-defined checkpoints). Verification points should affect the status of the block and,
through it, the status of the test. Note that the status definition is automatically set for
standard checkpoints.
Example of a standard checkpoint:
Browser("Browser").Page("Page").WebList("L1").Check Checkpoint("Arrivals List")
Example of a programmed algorithmic check:
ItemsCount1 = WebList("List1").QueryValue("count")
ItemsCount2 = WebList("List2").QueryValue("count")
If ItemsCount1 <> ItemsCount2 - 1 Then
    ...
End If
3.1.3 Auxiliary Section
The auxiliary section creates the initial conditions for the execution of a check section. Auxiliary blocks may contain checks relating to their own correct execution; in such cases, failures should be reported according to the reporting standard for the check blocks. The step name of an auxiliary block should be the same as that of the check block that uses the execution results of this auxiliary block.
3.1.4 Cleanup Section
The last block of the implementation section should be a cleanup block. This block should close all open files, unload libraries, remove temporary files, and restore the system configuration to the old values stored in the initialisation block. Typically, if a test fails, the application should be closed as well. The correct technique is to combine all these actions in a function that is called at any exit point. This section is also used to clear variables and objects from memory; we can use it to inform the VBScript garbage collector that we no longer need those objects.
fname.Close
Environment.Value("APPURL") = CurrValue
Set dbConnection = Nothing
3.1.5 Return Code
Each main test should return its completion status: 0 if the test passed and another value if the test failed. The ExitRun statement should be at the end of the main action if the test is implemented as a set of nested actions, or at the end of the last action if the test is implemented as a sequence of actions.
ExitRun (0)
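For example, a minimal sketch of the end of the last action (using the ExitRun form shown above together with the Reporter.RunStatus property described in section 5) could make the return value depend on the run status:
If Reporter.RunStatus = micFail Then
    ExitRun (1)    ' Test failed
Else
    ExitRun (0)    ' Test passed
End If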
In addition to ExitRun, the following statements can be used to control the return
value of an action, iteration or component:
· ExitAction
· ExitActionIteration
Mercury QuickTest Professional Scripting Guide
Proprietary and confidential to Mercury Interactive Corporation unless otherwise noted.
Page 12
· ExitGlobalIteration
· ExitTest
· ExitTestIteration
· ExitComponent (when implementing BPT)
· ExitComponentIteration (when implementing BPT)
3.1.6 Grouping the blocks
To make test scripts more readable, auxiliary and check blocks that form a logically independent unit in a test should be extracted into a function. If such a function is specific to a particular test, it should be stored in the functions section of that test.
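A minimal sketch of such an extracted check (the object names, property name and expected value are hypothetical, not from the guide) could look like this:
Function CheckArrivalsCount(ExpectedCount)
    Dim ActualCount
    ActualCount = Browser("Browser").Page("Page").WebList("Arrivals List").GetROProperty("items count")
    If ActualCount = ExpectedCount Then
        Reporter.ReportEvent micPass, "Arrivals count", "Found " & ActualCount & " items"
        CheckArrivalsCount = True
    Else
        Reporter.ReportEvent micFail, "Arrivals count", "Expected " & ExpectedCount & ", found " & ActualCount
        CheckArrivalsCount = False
    End If
End Function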
3.2 Code Layout
Follow these guidelines for the correct layout of your code:
3.2.1 Indentation
Screen space should be conserved as much as possible, while still allowing code
formatting to reflect logic structure and nesting. Every function, subroutine, loop and logical condition should be aligned. Here are a few suggestions:
· Indent standard nested blocks four spaces.
· Indent the overview comments of a procedure one space.
· Indent the highest level statements that follow the overview comments four
spaces, with each nested block indented an additional four spaces.
3.2.2 Header
All scripts should begin with a brief comment describing what they do; this is called
the header. This description should not describe the implementation details (how it
does it) because these often change over time, resulting in unnecessary comment
maintenance work, or worse, erroneous comments. The code itself and any necessary
inline comments describe the implementation.
More details will be presented later on for scripting code, giving a description of
specific blocks of data or statements.
Mercury QuickTest Professional Scripting Guide
Proprietary and confidential to Mercury Interactive Corporation unless otherwise noted.
Page 13
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
' SUMMARY:      The test checks login procedure
'
' DESCRIPTION:  The test checks that login is performed
'               when correct parameters are used
'
' RETURN CODE:  standard
'
' APPLICATION:  Flight
'
' NOTES:        The script requires DSN "app" to be set on the
'               machines
'
' AUTHOR:       Brain Tester. 29/02/02
' UPDATED:      Black Jack. 01/01/04
'
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
3.2.3 Comments
Every important variable declaration should include an inline comment describing the
use of the variable being declared.
Variables, objects, classes, subroutines and functions should be named clearly to ensure that inline comments are only needed for complex implementation details.
At the beginning of your script, you should include an overview that describes the
script, enumerating objects, procedures, algorithms, custom checkpoints, and other
system dependencies. Sometimes a piece of pseudo-code describing the algorithm can
be helpful.
You can add comments to your statements using an apostrophe ('), either at the
beginning of a separate line, or at the end of a statement. It is recommended that you
add comments wherever possible, to make your scripts easier to understand and
maintain.
3.3 Naming Conventions
The main reason for using a consistent set of coding and naming conventions is to standardize the structure and coding style of a script or set of scripts so that you and others can easily read and understand the code. Using good coding and naming conventions results in clear, precise, and readable source code that is consistent with other language conventions and is intuitive.
The names of functions and variables should be in mixed case, beginning with an upper-case letter for each segment of the name. The names of constants should be entirely in upper case.
For example:
MAXCHARNUMBER – constant for the maximum number of characters available for selection
GetFileSize – function to determine the size of a file
The names of public functions from the same module should start with a common prefix. For example, names of functions from the module supporting operations on the financial application should start with "Fin", as in FinCalculateTotals.
The names should be meaningful.
Use:
ItemsCount = WebList("List").QueryValue("count")
Instead of:
i = WebList("List").QueryValue("count")
Declarations for constants should be as follows:
Const LIBPATHNAME = "X:\TESTS\LIB\"
All variables in QuickTest Professional are local (private) to the specific action.
The declaration syntax is as follows:
Dim MainItemsCount
Initialisation of variables
Dim MainItemsCount
MainItemsCount = 1
To enhance readability and consistency, use the following prefixes with descriptive
names for variables in your VBScript code.
Subtype Prefix Example
Boolean bln blnFound
Byte byt bytRasterData
Date (Time) dtm dtmStart
Double dbl dblTolerance
Error err errOrderNum
Integer int intQuantity
Long lng lngDistance
Object obj objCurrent
Single sng sngAverage
String str strFirstName
4. Making Tests Robust
There is no set list of problems that might occur during the execution of a test; the problems vary and depend on the application under test, the technologies used, settings and environment, user software configuration and so forth.
A tester will often find that with every new project comes a new set of problems. Only experience can improve one's ability to predict problems before they arise.
Choosing the right test strategy is an important step to achieve efficient and robust test
automation. The following section describes some of the important methods that
teams should consider.
4.1 Synchronisation
When you run a test, your application may not always respond with the same speed.
For example, it might take a few seconds:
· for a progress bar to reach 100%
· for a status message to appear
· for a button to become enabled
· for a window or pop-up message to open
You can handle these anticipated timing problems by synchronizing your test to
ensure that QuickTest waits until your application is ready before performing a certain
step.
There are several options that you can use to synchronize your test:
4.1.1 Synchronisation Point
If you do not want QuickTest to perform a step or checkpoint until an object in your
application achieves a certain status, you should insert a synchronization point to
instruct QuickTest to pause the test until the object property achieves the value you
specify (or until a specified timeout is exceeded).
To insert a synchronization point:
1. Begin recording your test.
2. Display the screen or page in your application that contains the object for
which you want to insert a synchronization point.
3. In QuickTest, choose Insert > Synchronization Point. The mouse pointer turns
into a pointing hand.
4. Click the object in your application for which you want to insert a
synchronization point.
5. Enter the property name
6. Enter the property value
7. Specify the timeout
8. Click on “OK”
You can alternatively place a synchronisation point directly in the Expert View, using VBScript:
Dialog("Login").WaitProperty "visible", True, 10000
4.1.2 Exist and Wait Statements
You can enter Exist and/or Wait statements to instruct QuickTest to wait for a window
to open or an object to appear. Exist statements return a boolean value indicating
whether or not an object currently exists. Wait statements instruct QuickTest to wait a
specified amount of time before proceeding to the next step. You can combine these
statements within a loop to instruct QuickTest to wait until the object exists before
continuing with the test.
For example, the following statements instruct QuickTest to wait up to 20 seconds for
the Flights Table dialog box to open.
blnDone = Window("Flight Reservation").Dialog("Flights Table").Exist
counter = 1
While Not blnDone
    Wait (2)
    blnDone = Window("Flight Reservation").Dialog("Flights Table").Exist
    counter = counter + 1
    If counter = 10 Then
        blnDone = True
    End If
Wend
4.1.3 Global Timeout
If you find that, in general, the amount of time QuickTest waits for objects to appear
or for a browser to navigate to a specified page is insufficient, you can increase the
default object synchronization timeout values for your test and the browser navigation
timeout values for your test.
File > Settings > Run
4.2 Errors and Exception Handling for Test Scripts
4.2.1 Handling Errors
By default, if an error occurs during the run session, QuickTest displays a popup
message box describing the error. You must click a button on this message box to
continue or end the run session.
You can accept the pop-up message box option or specify a different response by choosing one of the alternative options in the "When error occurs during run session" list in File -> Settings -> Run:
· Pop up message box
· Proceed to next action iteration
· Stop Run
· Proceed to Next Step
When an error occurs, the information is sent to the Test Results and is therefore automatically logged for you.
Error codes can also be trapped in your script, allowing branch testing, conditional modularity, or simply determining whether a specific block passes or fails.
Error codes will help you find solutions to problems you may encounter while creating or running components. QuickTest Professional uses VBScript as its scripting language; therefore the errors that can be triggered are VBScript error codes.
If a QuickTest operation fails, the user can retrieve the error code at run time using the GetLastError built-in function.
Dialog("Login").WinEdit("AgentName").Set "1234"
LastErr = GetLastError()
MsgBox "Error number: " & LastErr & " - Description: " & DescribeResult(LastErr)
Users can also trap VBScript errors or raise their own errors in the script, thus making the script more robust. Once an error is trapped, the user can take a logical decision and, if necessary, exit the iteration or test gracefully, making sure a message is sent to the Test Results providing enough information about the error and about the state and end condition of the application under test.
This is done through the Err object, which is an intrinsic object with global scope;
therefore there is no need to create an instance of it in your code. The generator of an
error sets the properties of the Err object.
When a run-time error occurs, the properties of the Err object are filled with
information that uniquely identifies the error and information that can be used to
handle it. To generate a run-time error in your code, use the Raise method.
The Err object contains information about run-time errors, through the following:
Properties
· Description
· HelpContext
· HelpFile
· Number
· Source
Methods
· Clear
· Raise
The user can also enable or disable error handling; this is done through the On Error
Resume Next statement.
On Error Resume Next causes execution to continue with the statement immediately
following the statement that caused the run-time error, or with the statement
immediately following the most recent call out of the procedure containing the On
Error Resume Next statement. This allows execution to continue despite a run-time
error. You can then build the error-handling routine inline within the procedure. Note
that the error is not corrected, just ignored, and an error message is not displayed.
Use On Error GoTo 0 to disable error handling if you have previously enabled it
using On Error Resume Next.
Be aware that if you place the statements in a function, they will only apply to that function and not to the entire action or test.
Dim objFoo
' Enable error handling
On Error Resume Next
Set objFoo = CreateObject("Foo")
If Err.Number <> 0 Then
    ' Object couldn't be created - log the error
    Reporter.ReportEvent micFail, "Error in Foo", Err.Description & " - " & _
        apgSeverityError & " - " & Err.Number
Else
    ' Use objFoo somehow
    ...
End If
' Reset error handling
On Error GoTo 0
4.2.2 Handling Exceptions
Sometimes you may not be able to predict where in the script an error or application exception might occur, so adding error-handling statements into the script is not useful.
For these cases, you might choose to use recovery scenarios to catch and handle the unpredictable/unexpected run-time exception.
Make sure you understand the difference between a run-time error and an exception: a run-time error can be predicted (e.g. an expected error message dialog when a bad password is entered), whereas an exception cannot be predicted and could come from the application under test or from external sources (e.g. a "You have mail" message or a "Printer out of paper" message).
The Recovery Scenario Manager allows the tester to create and manage recovery scenarios to identify and take appropriate action when an exception occurs. Use a recovery scenario ONLY for exceptional or unpredictable events; expected errors (for example, during a negative testing execution) and predictable events should be handled directly in the test.
The Recovery Scenario Wizard leads you, step-by-step, through the process of
creating a recovery scenario. The Recovery Scenario Wizard contains five main steps:
1. defining the trigger event that interrupts the run session
2. specifying the recovery operation(s) required to continue
3. choosing a post-recovery test run operation
4. specifying a name and description for the recovery scenario
5. (for tests) specifying whether to associate the recovery scenario to the current
test and/or to all new tests
5. Result Logging
When a run session ends, you can view the run session results in the Test Results
window. By default, the Test Results window opens automatically at the end of a run.
If you want to change this behavior, clear the View results when run session ends
check box in the Run tab of the Options dialog box.
The Test Results window contains a description of the steps performed during the run
session. For a test that does not contain Data Table parameters, the Test Results
window shows a single test iteration.
If the test contains Data Table parameters, and the test settings are configured to run
multiple iterations, the Test Results window displays details for each iteration of the
test run. The results are grouped by the actions in the test.
You can define a message that QuickTest sends to your test results. For example,
suppose you want to check that a user name edit box exists in the application under
test. If the edit box exists, then a username is entered. Otherwise, QuickTest sends a
message to the test results indicating that the object is absent.
You can use the Reporter Utility object in your script to send a message to the Test
Results:
Associated Methods
· ReportEvent Method
Associated Properties
· Filter Property
· ReportPath Property
· RunStatus Property
The ReportEvent method reports an event to the Test Results, making sure that the
result is logged with the internal logging of the automation tool instead of sending it
to an external file.
The first argument of the method determines the status of the report message (be aware that this can either pass or fail the entire action, and therefore the entire test, so if a single step in your script fails, the entire test will fail, impacting the requirement that has been linked to the test in the requirement coverage), and it can be one of the following:
· 0 or micPass: Causes the status of this step to be passed and sends the
specified message to the report.
· 1 or micFail: Causes the status of this step to be failed and sends the specified
message to the report. When this step runs, the test fails.
· 2 or micDone: Sends a message to the report without affecting the pass/fail
status of the test.
· 3 or micWarning: Sends a warning message to the report, but does not cause
the test to stop running, and does not affect the pass/fail status of the test.
The following examples use the ReportEvent method to report a failed step.
Reporter.ReportEvent 1, "Custom Step", "The user-defined step failed."
or
Reporter.ReportEvent micFail, "Custom Step", "The user-defined step failed."
You can use the Filter property to completely disable or enable reporting of steps following the statement, or to indicate that you only want subsequent failed, or failed and warning, steps to be included in the report.
The options are:
· rfEnableAll
· rfEnableErrorsAndWarnings
· rfEnableErrorsOnly
· rfDisableAll
Reporter.ReportEvent micGeneral, "1", ""
Reporter.ReportEvent micGeneral, "2", ""
Reporter.Filter = rfDisableAll
Reporter.ReportEvent micGeneral, "3", ""
Reporter.ReportEvent micGeneral, "4", ""
Reporter.Filter = rfEnableAll
Reporter.ReportEvent micGeneral, "5", ""
Reporter.ReportEvent micGeneral, "6", ""
You can also use the RunStatus property to retrieve the run status at the current point
of the run session:
If Reporter.RunStatus = micFail Then ExitAction
6. Version Control
Version control of the QuickTest Professional scripts should be established using the ClearCase tool, together with the integration between ClearCase and TestDirector, which is available as a separate add-in.
Appendix A: Error Codes
When an error occurs, it can be a QuickTest operation error or a VBScript error.
QuickTest Run Operation Errors
The Run Error message box displayed during a run session offers a number of buttons for dealing with the errors encountered. The message provides all the necessary information about the error, and the information is also sent to the Test Results.
If an operation fails, the user can retrieve the error code at run time using the GetLastError built-in function.
Dialog("Login").WinEdit("AgentName").Set "1234"
LastErr = GetLastError()
MsgBox "Error number: " & LastErr & " - Description: " & DescribeResult(LastErr)
VBScript Errors
Error codes will help you find solutions to problems you may encounter while creating or running components. QuickTest Professional uses VBScript as its scripting language; therefore the errors that can be triggered are VBScript error codes.
Run Time Errors:
Error Number Description
429 ActiveX component can't create object
507 An exception occurred
449 Argument not optional
17 Can't perform requested operation
430 Class doesn't support Automation
506 Class not defined
11 Division by zero
48 Error in loading DLL
5020 Expected ')' in regular expression
5019 Expected ']' in regular expression
432 File name or class name not found during Automation operation
92 For loop not initialized
5008 Illegal assignment
51 Internal error
505 Invalid or unqualified reference
481 Invalid picture
5 Invalid procedure call or argument
5021 Invalid range in character set
94 Invalid use of Null
448 Named argument not found
447 Object doesn't support current locale setting
445 Object doesn't support this action
438 Object doesn't support this property or method
451 Object not a collection
504 Object not safe for creating
503 Object not safe for initializing
502 Object not safe for scripting
424 Object required
91 Object variable not set
7 Out of Memory
28 Out of stack space
14 Out of string space
6 Overflow
35 Sub or function not defined
9 Subscript out of range
5017 Syntax error in regular expression
462 The remote server machine does not exist or is unavailable
10 This array is fixed or temporarily locked
13 Type mismatch
5018 Unexpected quantifier
500 Variable is undefined
458 Variable uses an Automation type not supported in VBScript
450 Wrong number of arguments or invalid property assignment
Syntax Errors:
Error Number Description
1052 Cannot have multiple default property/method in a Class
1044 Cannot use parentheses when calling a Sub
1053 Class initialize or terminate do not have arguments
1058 'Default' specification can only be on Property Get
1057 'Default' specification must also specify 'Public'
1005 Expected '('
1006 Expected ')'
1011 Expected '='
1021 Expected 'Case'
1047 Expected 'Class'
1025 Expected end of statement
1014 Expected 'End'
1023 Expected expression
1015 Expected 'Function'
1010 Expected identifier
1012 Expected 'If'
1046 Expected 'In'
1026 Expected integer constant
1049 Expected Let or Set or Get in property declaration
1045 Expected literal constant
1019 Expected 'Loop'
1020 Expected 'Next'
1050 Expected 'Property'
1022 Expected 'Select'
1024 Expected statement
1016 Expected 'Sub'
1017 Expected 'Then'
1013 Expected 'To'
1018 Expected 'Wend'
1027 Expected 'While' or 'Until'
1028 Expected 'While,' 'Until,' or end of statement
1029 Expected 'With'
1030 Identifier too long
1014 Invalid character
1039 Invalid 'exit' statement
1040 Invalid 'for' loop control variable
1013 Invalid number
1037 Invalid use of 'Me' keyword
1038 'loop' without 'do'
1048 Must be defined inside a Class
1042 Must be first statement on the line
1041 Name redefined
1051 Number of arguments must be consistent across properties specification
1001 Out of Memory
1054 Property Set or Let must have at least one argument
1002 Syntax error
1055 Unexpected 'Next'
1015 Unterminated string constant
Appendix B: Template for Test Script Header
An example of the comment block is as follows:
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
' SUMMARY:      The test checks login procedure
'
' DESCRIPTION:  The test checks that login is performed
'               when correct parameters are used
'
' RETURN CODE:  standard
'
' APPLICATION:  Flight
'
' NOTES:        The script requires DSN "app" to be set on the
'               machines
'
' AUTHOR:       Brain Tester. 29/02/02
' UPDATED:      Black Jack. 01/01/04
'
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Appendix C: VBScript Useful Objects
WScript.Shell
This object can be used to replay key sequences when the QuickTest standard
functions do not work. Many other things can be done with this object.
Set WshShell = CreateObject("WScript.Shell")
Dim path, lang
path = "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WOW\boot.description\language.dll"
lang = WshShell.RegRead(path)
msgbox lang
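A minimal sketch of the key-replay use mentioned above (the window title and key sequence are hypothetical; AppActivate and SendKeys are standard WScript.Shell methods):
Set WshShell = CreateObject("WScript.Shell")
WshShell.AppActivate "Flight Reservation"    ' Bring the target window to the foreground.
Wait 1
WshShell.SendKeys "agent01{TAB}mercury{ENTER}"
Set WshShell = Nothing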
ADODB.Connection
This object can be used to connect to a database and query it.
Dim flightnumber
Dim dbexample
' Create the connection object.
Set dbexample = CreateObject("ADODB.Connection")
' Set the connection string and open the connection
dbexample.ConnectionString = "DBQ=D:\app\flight32.mdb;DefaultDir=D:\app;" & _
    "Driver={Microsoft Access Driver (*.mdb)};DriverId=281;FIL=MS Access;" & _
    "MaxBufferSize=2048;MaxScanRows=8;PageTimeout=5;SafeTransactions=0;" & _
    "Threads=3;UserCommitSync=Yes;"
dbexample.Open
' or use this method if a DSN entry was created.
'dbexample.Open("DSN=Flight32")
flightnumber = 6195
' Get the recordset returned from a select query.
Set recordset = dbexample.Execute("SELECT * from Orders WHERE Flight_Number = " & flightnumber)
' Display the results of the query.
msgbox recordset.GetString
' Close the database connection.
dbexample.Close
Set dbexample = Nothing
QuickTest.Application
This object is QuickTest itself. This can be used to parameterize QuickTest before
running a test or while running a test. For further information, see the help from Help
> QuickTest Automation Object Model Reference.
Dim qtApp
Set qtApp = CreateObject("QuickTest.Application")
qtApp.Launch 'Start QuickTest
qtApp.Visible = True 'Make the QuickTest application visible
Set qtTest = qtApp.Test
qtTest.Run
InternetExplorer.Application
This object can be used to create an Internet Explorer object.
Set IE = CreateObject("InternetExplorer.Application")
IE.Visible = True
IE.Navigate "http://cso-intranet/main.xml"
IE.FullScreen = True
IE.MenuBar = False
IE.StatusBar = False
IE.ToolBar = False
Wait 3
MsgBox IE.Name
IE.Quit
Mercury.DeviceReplay
This object can be used to replay user actions (keyboard or mouse). This is a replay
solution when nothing is recorded and when SendKeys does not work.
Set oDev = CreateObject("Mercury.DeviceReplay")
oDev.MouseClick absx + 5, absy + 5, CLng(Btn)
Set oDev = Nothing
Scripting.FileSystemObject
This object can be used to manipulate files in the filesystem.
Public Function CompareFiles (FilePath1, FilePath2)
Dim FS, File1, File2
Set FS = CreateObject("Scripting.FileSystemObject")
If FS.GetFile(FilePath1).Size <> FS.GetFile(FilePath2).Size Then
CompareFiles = True
Exit Function
End If
Set File1 = FS.GetFile(FilePath1).OpenAsTextStream(1, 0)
Set File2 = FS.GetFile(FilePath2).OpenAsTextStream(1, 0)
CompareFiles = False
Do While File1.AtEndOfStream = False
Str1 = File1.Read(1000)
Str2 = File2.Read(1000)
CompareFiles = StrComp(Str1, Str2, 0)
If CompareFiles <> 0 Then
CompareFiles = True
Exit Do
End If
Loop
File1.Close()
File2.Close()
End Function
' Example of use:
File1 = "C:\countries\apple1.jpg"
File2 = "C:\countries\apple3.jpg"
If CompareFiles(File1, File2) = False Then
MsgBox "Files are identical."
Else
MsgBox "Files are different."
End If
WinHttp.WinHttpRequest.5.1
This is an object for managing the HTTP protocol.
Public Function DownloadURLToLocal(URL, Target)
Set FSO = CreateObject("Scripting.FileSystemObject")
Set FileObj = FSO.OpenTextFile(Target, 2, True) ' Write mode
Set HTTP = CreateObject("WinHttp.WinHttpRequest.5.1")
HTTP.Open "GET", URL, False
HTTP.Send
FileObj.Write HTTP.ResponseText
FileObj.Close
Set HTTP = Nothing
Set FileObj = Nothing
Set FSO = Nothing
End Function
DownloadURLToLocal "http://cso-intranet/Mercury-General/examples/rpcrouter", "C:\rpcrouter3.xml"
'XMLFile("rpcrouter.xml").Check CheckPoint("rpcrouter.xml_2")
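The same object can post data as well as fetch it. A sketch of a POST with a request header and a status check; the payload is a placeholder and the URL is reused from the example above.
Dim HTTP
Set HTTP = CreateObject("WinHttp.WinHttpRequest.5.1")
HTTP.Open "POST", "http://cso-intranet/Mercury-General/examples/rpcrouter", False
HTTP.SetRequestHeader "Content-Type", "text/xml"
HTTP.Send "<request>placeholder payload</request>"
' Status and ResponseText are available once the synchronous Send returns.
If HTTP.Status = 200 Then
    MsgBox HTTP.ResponseText
Else
    MsgBox "Request failed with HTTP status " & HTTP.Status
End If
Set HTTP = Nothing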

Certified Tester
Foundation Level Syllabus
Version 2007

International Software Testing Qualifications Board
Copyright © 2007 the authors for the update 2007 (Thomas Müller (chair), Dorothy Graham, Debra
Friedenberg and Erik van Veenendaal)
Copyright © 2005, the authors (Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham,
Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal).
All rights reserved.
The authors are transferring the copyright to the International Software Testing Qualifications Board
(ISTQB). The authors (as current copyright holders) and ISTQB (as the future copyright holder)
have agreed to the following conditions of use:
1) Any individual or training company may use this syllabus as the basis for a training course if the
authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus
and provided that any advertisement of such a training course may mention the syllabus only
after submission for official accreditation of the training materials to an ISTQB-recognized
National Board.
2) Any individual or group of individuals may use this syllabus as the basis for articles, books, or
other derivative writings if the authors and the ISTQB are acknowledged as the source and
copyright owners of the syllabus.
3) Any ISTQB-recognized National Board may translate this syllabus and license the syllabus (or
its translation) to other parties.
Revision History
Version      Date          Remarks
ISTQB 2007   01-May-2007   Certified Tester Foundation Level Syllabus Maintenance Release – see Appendix E – Release Notes Syllabus 2007
ISTQB 2005   01-July-2005  Certified Tester Foundation Level Syllabus
ASQF V2.2    July-2003     ASQF Syllabus Foundation Level Version 2.2 "Lehrplan Grundlagen des Softwaretestens" (German: "Foundations of Software Testing")
ISEB V2.0    25-Feb-1999   ISEB Software Testing Foundation Syllabus V2.0, 25 February 1999
Table of Contents
Acknowledgements
Introduction to this syllabus
Purpose of this document
The Certified Tester Foundation Level in Software Testing
Learning objectives/level of knowledge
The examination
Accreditation
Level of detail
How this syllabus is organized
1. Fundamentals of testing (K2)
1.1 Why is testing necessary? (K2)
1.1.1 Software systems context (K1)
1.1.2 Causes of software defects (K2)
1.1.3 Role of testing in software development, maintenance and operations (K2)
1.1.4 Testing and quality (K2)
1.1.5 How much testing is enough? (K2)
1.2 What is testing? (K2)
1.3 General testing principles (K2)
1.4 Fundamental test process (K1)
1.4.1 Test planning and control (K1)
1.4.2 Test analysis and design (K1)
1.4.3 Test implementation and execution (K1)
1.4.4 Evaluating exit criteria and reporting (K1)
1.4.5 Test closure activities (K1)
1.5 The psychology of testing (K2)
2. Testing throughout the software life cycle (K2)
2.1 Software development models (K2)
2.1.1 V-model (sequential development model) (K2)
2.1.2 Iterative-incremental development models (K2)
2.1.3 Testing within a life cycle model (K2)
2.2 Test levels (K2)
2.2.1 Component testing (K2)
2.2.2 Integration testing (K2)
2.2.3 System testing (K2)
2.2.4 Acceptance testing (K2)
2.3 Test types (K2)
2.3.1 Testing of function (functional testing) (K2)
2.3.2 Testing of non-functional software characteristics (non-functional testing) (K2)
2.3.3 Testing of software structure/architecture (structural testing) (K2)
2.3.4 Testing related to changes (confirmation testing (retesting) and regression testing) (K2)
2.4 Maintenance testing (K2)
3. Static techniques (K2)
3.1 Static techniques and the test process (K2)
3.2 Review process (K2)
3.2.1 Phases of a formal review (K1)
3.2.2 Roles and responsibilities (K1)
3.2.3 Types of review (K2)
3.2.4 Success factors for reviews (K2)
3.3 Static analysis by tools (K2)
4. Test design techniques (K3)
4.1 The test development process (K2)
4.2 Categories of test design techniques (K2)
4.3 Specification-based or black-box techniques (K3)
4.3.1 Equivalence partitioning (K3)
4.3.2 Boundary value analysis (K3)
4.3.3 Decision table testing (K3)
4.3.4 State transition testing (K3)
4.3.5 Use case testing (K2)
4.4 Structure-based or white-box techniques (K3)
4.4.1 Statement testing and coverage (K3)
4.4.2 Decision testing and coverage (K3)
4.4.3 Other structure-based techniques (K1)
4.5 Experience-based techniques (K2)
4.6 Choosing test techniques (K2)
5. Test management (K3)
5.1 Test organization (K2)
5.1.1 Test organization and independence (K2)
5.1.2 Tasks of the test leader and tester (K1)
5.2 Test planning and estimation (K2)
5.2.1 Test planning (K2)
5.2.2 Test planning activities (K2)
5.2.3 Exit criteria (K2)
5.2.4 Test estimation (K2)
5.2.5 Test approaches (test strategies) (K2)
5.3 Test progress monitoring and control (K2)
5.3.1 Test progress monitoring (K1)
5.3.2 Test Reporting (K2)
5.3.3 Test control (K2)
5.4 Configuration management (K2)
5.5 Risk and testing (K2)
5.5.1 Project risks (K2)
5.5.2 Product risks (K2)
5.6 Incident management (K3)
6. Tool support for testing (K2)
6.1 Types of test tool (K2)
6.1.1 Test tool classification (K2)
6.1.2 Tool support for management of testing and tests (K1)
6.1.3 Tool support for static testing (K1)
6.1.4 Tool support for test specification (K1)
6.1.5 Tool support for test execution and logging (K1)
6.1.6 Tool support for performance and monitoring (K1)
6.1.7 Tool support for specific application areas (K1)
6.1.8 Tool support using other tools (K1)
6.2 Effective use of tools: potential benefits and risks (K2)
6.2.1 Potential benefits and risks of tool support for testing (for all tools) (K2)
6.2.2 Special considerations for some types of tool (K1)
6.3 Introducing a tool into an organization (K1)
7. References
Standards
Books
8. Appendix A – Syllabus background
History of this document
Objectives of the Foundation Certificate qualification
Objectives of the international qualification (adapted from ISTQB meeting at Sollentuna, November 2001)
Entry requirements for this qualification
Background and history of the Foundation Certificate in Software Testing
9. Appendix B – Learning objectives/level of knowledge
Level 1: Remember (K1)
Level 2: Understand (K2)
Level 3: Apply (K3)
10. Appendix C – Rules applied to the ISTQB Foundation syllabus
General rules
Current content
Learning Objectives
Overall structure
11. Appendix D – Notice to training providers
12. Appendix E – Release Notes Syllabus 2007
13. Index
Acknowledgements
International Software Testing Qualifications Board Working Party Foundation Level (Edition 2007):
Thomas Müller (chair), Dorothy Graham, Debra Friedenberg, and Erik van Veenendaal. The core
team thanks the review team (Hans Schaefer, Stephanie Ulrich, Meile Posthuma, Anders
Pettersson, and Wonil Kwon) and all national boards for the suggestions to the current version of
the syllabus.
International Software Testing Qualifications Board Working Party Foundation Level (Edition 2005):
Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi,
Geoff Thompson and Erik van Veenendaal. The core team thanks the review team and all national
boards for the suggestions to the current syllabus.
Particular thanks to: (Denmark) Klaus Olsen, Christine Rosenbeck-Larsen, (Germany) Matthias
Daigl, Uwe Hehn, Tilo Linz, Horst Pohlmann, Ina Schieferdecker, Sabine Uhde, Stephanie Ulrich,
(Netherlands) Meile Posthuma, (India) Vipul Kocher, (Israel) Shmuel Knishinsky, Ester Zabar,
(Sweden) Anders Claesson, Mattias Nordin, Ingvar Nordström, Stefan Ohlsson, Kennet Osbjer,
Ingela Skytte, Klaus Zeuge, (Switzerland) Armin Born, Silvio Moser, Reto Müller, Joerg Pietzsch,
(UK) Aran Ebbett, Isabel Evans, Julie Gardiner, Andrew Goslin, Brian Hambling, James Lyndsay,
Helen Moore, Peter Morgan, Trevor Newton, Angelina Samaroo, Shane Saunders, Mike Smith,
Richard Taylor, Neil Thompson, Pete Williams, (US) Jon D Hagar, Dale Perry.
Introduction to this syllabus
Purpose of this document
This syllabus forms the basis for the International Software Testing Qualification at the Foundation
Level. The International Software Testing Qualifications Board (ISTQB) provides it to the national
examination bodies for them to accredit the training providers and to derive examination questions
in their local language. Training providers will produce courseware and determine appropriate
teaching methods for accreditation, and the syllabus will help candidates in their preparation for the
examination.
Information on the history and background of the syllabus can be found in Appendix A.
The Certified Tester Foundation Level in Software Testing
The Foundation Level qualification is aimed at anyone involved in software testing. This includes
people in roles such as testers, test analysts, test engineers, test consultants, test managers, user
acceptance testers and software developers. This Foundation Level qualification is also appropriate
for anyone who wants a basic understanding of software testing, such as project managers, quality
managers, software development managers, business analysts, IT directors and management
consultants. Holders of the Foundation Certificate will be able to go on to a higher level software
testing qualification.
Learning objectives/level of knowledge
Cognitive levels are given for each section in this syllabus:
o K1: remember, recognize, recall;
o K2: understand, explain, give reasons, compare, classify, categorize, give examples,
summarize;
o K3: apply, use.
Further details and examples of learning objectives are given in Appendix B.
All terms listed under “Terms” just below chapter headings shall be remembered (K1), even if not
explicitly mentioned in the learning objectives.
The examination
The Foundation Certificate examination will be based on this syllabus. Answers to examination
questions may require the use of material based on more than one section of this syllabus. All
sections of the syllabus are examinable.
The format of the examination is multiple choice.
Exams may be taken as part of an accredited training course or taken independently (e.g. at an
examination centre).
Accreditation
Training providers whose course material follows this syllabus may be accredited by a national
board recognized by ISTQB. Accreditation guidelines should be obtained from the board or body
that performs the accreditation. An accredited course is recognized as conforming to this syllabus,
and is allowed to have an ISTQB examination as part of the course.
Further guidance for training providers is given in Appendix D.
Level of detail
The level of detail in this syllabus allows internationally consistent teaching and examination. In
order to achieve this goal, the syllabus consists of:
o General instructional objectives describing the intention of the foundation level.
o A list of information to teach, including a description, and references to additional sources if
required.
o Learning objectives for each knowledge area, describing the cognitive learning outcome and
mindset to be achieved.
o A list of terms that students must be able to recall and have understood.
o A description of the key concepts to teach, including sources such as accepted literature or
standards.
The syllabus content is not a description of the entire knowledge area of software testing; it reflects
the level of detail to be covered in foundation level training courses.
How this syllabus is organized
There are six major chapters. The top level heading shows the levels of learning objectives that are
covered within the chapter, and specifies the time for the chapter. For example:
2. Testing throughout the software life cycle (K2) 115 minutes
shows that Chapter 2 has learning objectives of K1 (assumed when a higher level is shown) and K2
(but not K3), and is intended to take 115 minutes to teach the material in the chapter. Within each
chapter there are a number of sections. Each section also has the learning objectives and the
amount of time required. Subsections that do not have a time given are included within the time for
the section.
1. Fundamentals of testing (K2) 155 minutes
Learning objectives for fundamentals of testing
The objectives identify what you will be able to do following the completion of each module.
1.1 Why is testing necessary? (K2)
LO-1.1.1 Describe, with examples, the way in which a defect in software can cause harm to a
person, to the environment or to a company. (K2)
LO-1.1.2 Distinguish between the root cause of a defect and its effects. (K2)
LO-1.1.3 Give reasons why testing is necessary by giving examples. (K2)
LO-1.1.4 Describe why testing is part of quality assurance and give examples of how testing
contributes to higher quality. (K2)
LO-1.1.5 Recall the terms error, defect, fault, failure and corresponding terms mistake and bug.
(K1)
1.2 What is testing? (K2)
LO-1.2.1 Recall the common objectives of testing. (K1)
LO-1.2.2 Describe the purpose of testing in software development, maintenance and operations
as a means to find defects, provide confidence and information, and prevent defects.
(K2)
1.3 General testing principles (K2)
LO-1.3.1 Explain the fundamental principles in testing. (K2)
1.4 Fundamental test process (K1)
LO-1.4.1 Recall the fundamental test activities from planning to test closure activities and the
main tasks of each test activity. (K1)
1.5 The psychology of testing (K2)
LO-1.5.1 Recall that the success of testing is influenced by psychological factors (K1):
o clear test objectives determine testers’ effectiveness;
o blindness to one’s own errors;
o courteous communication and feedback on defects.
LO-1.5.2 Contrast the mindset of a tester and of a developer. (K2)
1.1 Why is testing necessary? (K2) 20 minutes
Terms
Bug, defect, error, failure, fault, mistake, quality, risk.
1.1.1 Software systems context (K1)
Software systems are an increasing part of life, from business applications (e.g. banking) to
consumer products (e.g. cars). Most people have had an experience with software that did not work
as expected. Software that does not work correctly can lead to many problems, including loss of
money, time or business reputation, and could even cause injury or death.
1.1.2 Causes of software defects (K2)
A human being can make an error (mistake), which produces a defect (fault, bug) in the code, in
software or a system, or in a document. If a defect in code is executed, the system will fail to do
what it should do (or do something it shouldn’t), causing a failure. Defects in software, systems or
documents may result in failures, but not all defects do so.
Defects occur because human beings are fallible and because there is time pressure, complex
code, complexity of infrastructure, changed technologies, and/or many system interactions.
Failures can be caused by environmental conditions as well: radiation, magnetism, electromagnetic fields,
and pollution can cause faults in firmware or influence the execution of software by changing
hardware conditions.
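As an illustration (not part of the syllabus text), the chain from error to defect to failure can be shown with a deliberately faulty VBScript function; the function and values are invented.
' The programmer's error (mistake): writing > where >= was intended.
' That error leaves a defect (fault, bug) in the code of this function.
Function IsAdult(age)
    If age > 18 Then        ' Defect: an 18-year-old is wrongly rejected.
        IsAdult = True
    Else
        IsAdult = False
    End If
End Function
MsgBox IsAdult(18)   ' Executing the defect causes a failure: False is shown instead of True.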
1.1.3 Role of testing in software development, maintenance and operations
(K2)
Rigorous testing of systems and documentation can help to reduce the risk of problems occurring
during operation and contribute to the quality of the software system, if defects found are corrected
before the system is released for operational use.
Software testing may also be required to meet contractual or legal requirements, or industry-specific
standards.
1.1.4 Testing and quality (K2)
With the help of testing, it is possible to measure the quality of software in terms of defects found,
for both functional and non-functional software requirements and characteristics (e.g. reliability,
usability, efficiency, maintainability and portability). For more information on non-functional testing
see Chapter 2; for more information on software characteristics see ‘Software Engineering –
Software Product Quality’ (ISO 9126).
Testing can give confidence in the quality of the software if it finds few or no defects. A properly
designed test that passes reduces the overall level of risk in a system. When testing does find
defects, the quality of the software system increases when those defects are fixed.
Lessons should be learned from previous projects. By understanding the root causes of defects
found in other projects, processes can be improved, which in turn should prevent those defects from
reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of
quality assurance.
Testing should be integrated as one of the quality assurance activities (i.e. alongside development
standards, training and defect analysis).
1.1.5 How much testing is enough? (K2)
Deciding how much testing is enough should take account of the level of risk, including technical
and business product and project risks, and project constraints such as time and budget. (Risk is
discussed further in Chapter 5.)
Testing should provide sufficient information to stakeholders to make informed decisions about the
release of the software or system being tested, for the next development step or handover to
customers.
1.2 What is testing? (K2) 30 minutes
Terms
Debugging, requirement, review, test case, testing, test objective.
Background
A common perception of testing is that it only consists of running tests, i.e. executing the software.
This is part of testing, but not all of the testing activities.
Test activities exist before and after test execution: activities such as planning and control, choosing
test conditions, designing test cases and checking results, evaluating exit criteria, reporting on the
testing process and system under test, and finalizing or closure (e.g. after a test phase has been
completed). Testing also includes reviewing of documents (including source code) and static
analysis.
Both dynamic testing and static testing can be used as a means for achieving similar objectives,
and will provide information in order to improve both the system to be tested, and the development
and testing processes.
There can be different test objectives:
o finding defects;
o gaining confidence about the level of quality and providing information;
o preventing defects.
The thought process of designing tests early in the life cycle (verifying the test basis via test design)
can help to prevent defects from being introduced into code. Reviews of documents (e.g.
requirements) also help to prevent defects appearing in the code.
Different viewpoints in testing take different objectives into account. For example, in development
testing (e.g. component, integration and system testing), the main objective may be to cause as
many failures as possible so that defects in the software are identified and can be fixed. In
acceptance testing, the main objective may be to confirm that the system works as expected, to
gain confidence that it has met the requirements. In some cases the main objective of testing may
be to assess the quality of the software (with no intention of fixing defects), to give information to
stakeholders of the risk of releasing the system at a given time. Maintenance testing often includes
testing that no new defects have been introduced during development of the changes. During
operational testing, the main objective may be to assess system characteristics such as reliability or
availability.
Debugging and testing are different. Testing can show failures that are caused by defects.
Debugging is the development activity that identifies the cause of a defect, repairs the code and
checks that the defect has been fixed correctly. Subsequent confirmation testing by a tester ensures
that the fix does indeed resolve the failure. The responsibility for each activity is very different, i.e.
testers test and developers debug.
The process of testing and its activities is explained in Section 1.4.
1.3 General testing principles (K2) 35 minutes
Terms
Exhaustive testing.
Principles
A number of testing principles have been suggested over the past 40 years and offer general
guidelines common for all testing.
Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing
reduces the probability of undiscovered defects remaining in the software but, even if no defects are
found, it is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial
cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing
efforts.
Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life cycle,
and should be focused on defined objectives.
Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing, or
are responsible for the most operational failures.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no
longer find any new defects. To overcome this “pesticide paradox”, the test cases need to be
regularly reviewed and revised, and new and different tests need to be written to exercise different
parts of the software or system to potentially find more defects.
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested
differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’
needs and expectations.
1.4 Fundamental test process (K1) 35 minutes
Terms
Confirmation testing, retesting, exit criteria, incident, regression testing, test basis, test condition,
test coverage, test data, test execution, test log, test plan, test procedure, test policy, test strategy,
test suite, test summary report, testware.
Background
The most visible part of testing is executing tests. But to be effective and efficient, test plans should
also include time to be spent on planning the tests, designing test cases, preparing for execution
and evaluating status.
The fundamental test process consists of the following main activities:
o planning and control;
o analysis and design;
o implementation and execution;
o evaluating exit criteria and reporting;
o test closure activities.
Although logically sequential, the activities in the process may overlap or take place concurrently.
1.4.1 Test planning and control (K1)
Test planning is the activity of verifying the mission of testing, defining the objectives of testing and
the specification of test activities in order to meet the objectives and mission.
Test control is the ongoing activity of comparing actual progress against the plan, and reporting the
status, including deviations from the plan. It involves taking actions necessary to meet the mission
and objectives of the project. In order to control testing, it should be monitored throughout the
project. Test planning takes into account the feedback from monitoring and control activities.
Test planning and control tasks are defined in Chapter 5 of this syllabus.
1.4.2 Test analysis and design (K1)
Test analysis and design is the activity where general testing objectives are transformed into
tangible test conditions and test cases.
Test analysis and design has the following major tasks:
o Reviewing the test basis (such as requirements, architecture, design, interfaces).
o Evaluating testability of the test basis and test objects.
o Identifying and prioritizing test conditions based on analysis of test items, the specification,
behaviour and structure.
o Designing and prioritizing test cases.
o Identifying necessary test data to support the test conditions and test cases.
o Designing the test environment set-up and identifying any required infrastructure and tools.
1.4.3 Test implementation and execution (K1)
Test implementation and execution is the activity where test procedures or scripts are specified by
combining the test cases in a particular order and including any other information needed for test
execution, the environment is set up and the tests are run.
Test implementation and execution has the following major tasks:
o Developing, implementing and prioritizing test cases.
o Developing and prioritizing test procedures, creating test data and, optionally, preparing test
harnesses and writing automated test scripts.
o Creating test suites from the test procedures for efficient test execution.
o Verifying that the test environment has been set up correctly.
o Executing test procedures either manually or by using test execution tools, according to the
planned sequence.
o Logging the outcome of test execution and recording the identities and versions of the software
under test, test tools and testware.
o Comparing actual results with expected results.
o Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g.
a defect in the code, in specified test data, in the test document, or a mistake in the way the test
was executed).
o Repeating test activities as a result of action taken for each discrepancy. For example, re-execution
of a test that previously failed in order to confirm a fix (confirmation testing), execution
of a corrected test and/or execution of tests in order to ensure that defects have not been
introduced in unchanged areas of the software or that defect fixing did not uncover other
defects (regression testing).
1.4.4 Evaluating exit criteria and reporting (K1)
Evaluating exit criteria is the activity where test execution is assessed against the defined
objectives. This should be done for each test level.
Evaluating exit criteria has the following major tasks:
o Checking test logs against the exit criteria specified in test planning.
o Assessing if more tests are needed or if the exit criteria specified should be changed.
o Writing a test summary report for stakeholders.
1.4.5 Test closure activities (K1)
Test closure activities collect data from completed test activities to consolidate experience,
testware, facts and numbers. For example, when a software system is released, a test project is
completed (or cancelled), a milestone has been achieved, or a maintenance release has been
completed.
Test closure activities include the following major tasks:
o Checking which planned deliverables have been delivered, the closure of incident reports or
raising of change records for any that remain open, and the documentation of the acceptance of
the system.
o Finalizing and archiving testware, the test environment and the test infrastructure for later
reuse.
o Handover of testware to the maintenance organization.
o Analyzing lessons learned for future releases and projects, and the improvement of test
maturity.
1.5 The psychology of testing (K2) 35 minutes
Terms
Error guessing, independence.
Background
The mindset to be used while testing and reviewing is different to that used while developing
software. With the right mindset developers are able to test their own code, but separation of this
responsibility to a tester is typically done to help focus effort and provide additional benefits, such as
an independent view by trained and professional testing resources. Independent testing may be
carried out at any level of testing.
A certain degree of independence (avoiding the author bias) is often more effective at finding
defects and failures. Independence is not, however, a replacement for familiarity, and developers
can efficiently find many defects in their own code. Several levels of independence can be defined:
o Tests designed by the person(s) who wrote the software under test (low level of independence).
o Tests designed by another person(s) (e.g. from the development team).
o Tests designed by a person(s) from a different organizational group (e.g. an independent test
team) or test specialists (e.g. usability or performance test specialists).
o Tests designed by a person(s) from a different organization or company (i.e. outsourcing or
certification by an external body).
People and projects are driven by objectives. People tend to align their plans with the objectives set
by management and other stakeholders, for example, to find defects or to confirm that software
works. Therefore, it is important to clearly state the objectives of testing.
Identifying failures during testing may be perceived as criticism against the product and against the
author. Testing is, therefore, often seen as a destructive activity, even though it is very constructive
in the management of product risks. Looking for failures in a system requires curiosity, professional
pessimism, a critical eye, attention to detail, good communication with development peers, and
experience on which to base error guessing.
If errors, defects or failures are communicated in a constructive way, bad feelings between the
testers and the analysts, designers and developers can be avoided. This applies to reviewing as
well as in testing.
The tester and test leader need good interpersonal skills to communicate factual information about
defects, progress and risks, in a constructive way. For the author of the software or document,
defect information can help them improve their skills. Defects found and fixed during testing will
save time and money later, and reduce risks.
Communication problems may occur, particularly if testers are seen only as messengers of
unwanted news about defects. However, there are several ways to improve communication and
relationships between testers and others:
o Start with collaboration rather than battles – remind everyone of the common goal of better
quality systems.
o Communicate findings on the product in a neutral, fact-focused way without criticizing the
person who created it, for example, write objective and factual incident reports and review
findings.
o Try to understand how the other person feels and why they react as they do.
o Confirm that the other person has understood what you have said and vice versa.
References
1.1.5 Black, 2001, Kaner, 2002
1.2 Beizer, 1990, Black, 2001, Myers, 1979
1.3 Beizer, 1990, Hetzel, 1988, Myers, 1979
1.4 Hetzel, 1988
1.4.5 Black, 2001, Craig, 2002
1.5 Black, 2001, Hetzel, 1988
2. Testing throughout the software life cycle (K2) 115 minutes
Learning objectives for testing throughout the software life cycle
The objectives identify what you will be able to do following the completion of each module.
2.1 Software development models (K2)
LO-2.1.1 Understand the relationship between development, test activities and work products in
the development life cycle, and give examples based on project and product
characteristics and context (K2).
LO-2.1.2 Recognize the fact that software development models must be adapted to the context
of project and product characteristics. (K1)
LO-2.1.3 Recall reasons for different levels of testing, and characteristics of good testing in any
life cycle model. (K1)
2.2 Test levels (K2)
LO-2.2.1 Compare the different levels of testing: major objectives, typical objects of testing,
typical targets of testing (e.g. functional or structural) and related work products, people
who test, types of defects and failures to be identified. (K2)
2.3 Test types (K2)
LO-2.3.1 Compare four software test types (functional, non-functional, structural and change-related)
by example. (K2)
LO-2.3.2 Recognize that functional and structural tests occur at any test level. (K1)
LO-2.3.3 Identify and describe non-functional test types based on non-functional requirements.
(K2)
LO-2.3.4 Identify and describe test types based on the analysis of a software system’s structure
or architecture. (K2)
LO-2.3.5 Describe the purpose of confirmation testing and regression testing. (K2)
2.4 Maintenance testing (K2)
LO-2.4.1 Compare maintenance testing (testing an existing system) to testing a new application
with respect to test types, triggers for testing and amount of testing. (K2)
LO-2.4.2 Identify reasons for maintenance testing (modification, migration and retirement). (K1)
LO-2.4.3 Describe the role of regression testing and impact analysis in maintenance. (K2)
2.1 Software development models (K2) 20 minutes
Terms
Commercial off-the-shelf (COTS), iterative-incremental development model, validation, verification,
V-model.
Background
Testing does not exist in isolation; test activities are related to software development activities.
Different development life cycle models need different approaches to testing.
2.1.1 V-model (sequential development model) (K2)
Although variants of the V-model exist, a common type of V-model uses four test levels,
corresponding to the four development levels.
The four levels used in this syllabus are:
o component (unit) testing;
o integration testing;
o system testing;
o acceptance testing.
In practice, a V-model may have more, fewer or different levels of development and testing,
depending on the project and the software product. For example, there may be component
integration testing after component testing, and system integration testing after system testing.
Software work products (such as business scenarios or use cases, requirements specifications,
design documents and code) produced during development are often the basis of testing in one or
more test levels. References for generic work products include Capability Maturity Model Integration
(CMMI) or ‘Software life cycle processes’ (IEEE/IEC 12207). Verification and validation (and early
test design) can be carried out during the development of the software work products.
2.1.2 Iterative-incremental development models (K2)
Iterative-incremental development is the process of establishing requirements, designing, building
and testing a system, done as a series of shorter development cycles. Examples are: prototyping,
rapid application development (RAD), Rational Unified Process (RUP) and agile development
models. The resulting system produced by an iteration may be tested at several levels as part of its
development. An increment, added to others developed previously, forms a growing partial system,
which should also be tested. Regression testing is increasingly important on all iterations after the
first one. Verification and validation can be carried out on each increment.
2.1.3 Testing within a life cycle model (K2)
In any life cycle model, there are several characteristics of good testing:
o For every development activity there is a corresponding testing activity.
o Each test level has test objectives specific to that level.
o The analysis and design of tests for a given test level should begin during the corresponding
development activity.
o Testers should be involved in reviewing documents as soon as drafts are available in the
development life cycle.
Test levels can be combined or reorganized depending on the nature of the project or the system
architecture. For example, for the integration of a commercial off-the-shelf (COTS) software product
into a system, the purchaser may perform integration testing at the system level (e.g. integration to
the infrastructure and other systems, or system deployment) and acceptance testing (functional
and/or non-functional, and user and/or operational testing).
2.2 Test levels (K2) 40 minutes
Terms
Alpha testing, beta testing, component testing (also known as unit, module or program testing),
driver, field testing, functional requirement, integration, integration testing, non-functional
requirement, robustness testing, stub, system testing, test level, test-driven development, test
environment, user acceptance testing.
Background
For each of the test levels, the following can be identified: their generic objectives, the work
product(s) being referenced for deriving test cases (i.e. the test basis), the test object (i.e. what is
being tested), typical defects and failures to be found, test harness requirements and tool support,
and specific approaches and responsibilities.
2.2.1 Component testing (K2)
Component testing searches for defects in, and verifies the functioning of, software (e.g. modules,
programs, objects, classes, etc.) that are separately testable. It may be done in isolation from the
rest of the system, depending on the context of the development life cycle and the system. Stubs,
drivers and simulators may be used.
Component testing may include testing of functionality and specific non-functional characteristics,
such as resource-behaviour (e.g. memory leaks) or robustness testing, as well as structural testing
(e.g. branch coverage). Test cases are derived from work products such as a specification of the
component, the software design or the data model.
Typically, component testing occurs with access to the code being tested and with the support of
the development environment, such as a unit test framework or debugging tool, and, in practice,
usually involves the programmer who wrote the code. Defects are typically fixed as soon as they
are found, without formally recording incidents.
One approach to component testing is to prepare and automate test cases before coding. This is
called a test-first approach or test-driven development. This approach is highly iterative and is
based on cycles of developing test cases, then building and integrating small pieces of code, and
executing the component tests until they pass.
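As an illustration only, a test-first component test in VBScript might look like the sketch below; the check is written before the Add function is implemented, and the component is then coded until the check passes. All names are invented.
' Test written first: it fails until the component below behaves correctly.
Sub Test_Add_ReturnsSum()
    If Add(2, 3) = 5 Then
        MsgBox "Test_Add_ReturnsSum: PASS"
    Else
        MsgBox "Test_Add_ReturnsSum: FAIL"
    End If
End Sub
' Component under test, built and refined until the test above passes.
Function Add(a, b)
    Add = a + b
End Function
Test_Add_ReturnsSum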
2.2.2 Integration testing (K2)
Integration testing tests interfaces between components, interactions with different parts of a
system, such as the operating system, file system, hardware, or interfaces between systems.
There may be more than one level of integration testing and it may be carried out on test objects of
varying size. For example:
1. Component integration testing tests the interactions between software components and is done
after component testing;
2. System integration testing tests the interactions between different systems and may be done
after system testing. In this case, the developing organization may control only one side of the
interface, so changes may be destabilizing. Business processes implemented as workflows
may involve a series of systems. Cross-platform issues may be significant.
The greater the scope of integration, the more difficult it becomes to isolate failures to a specific
component or system, which may lead to increased risk.
Systematic integration strategies may be based on the system architecture (such as top-down and
bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system
or component. In order to reduce the risk of late defect discovery, integration should normally be
incremental rather than “big bang”.
Testing of specific non-functional characteristics (e.g. performance) may be included in integration
testing.
At each stage of integration, testers concentrate solely on the integration itself. For example, if they
are integrating module A with module B they are interested in testing the communication between
the modules, not the functionality of either module. Both functional and structural approaches may
be used.
Ideally, testers should understand the architecture and influence integration planning. If integration
tests are planned before components or systems are built, they can be built in the order required for
most efficient testing.
2.2.3 System testing (K2)
System testing is concerned with the behaviour of a whole system/product as defined by the scope
of a development project or programme.
In system testing, the test environment should correspond to the final target or production
environment as much as possible in order to minimize the risk of environment-specific failures not
being found in testing.
System testing may include tests based on risks and/or on requirements specifications, business
processes, use cases, or other high level descriptions of system behaviour, interactions with the
operating system, and system resources.
System testing should investigate both functional and non-functional requirements of the system.
Requirements may exist as text and/or models. Testers also need to deal with incomplete or
undocumented requirements. System testing of functional requirements starts by using the most
appropriate specification-based (black-box) techniques for the aspect of the system to be tested.
For example, a decision table may be created for combinations of effects described in business
rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the
testing with respect to a structural element, such as menu structure or web page navigation. (See
Chapter 4.)
An independent test team often carries out system testing.
2.2.4 Acceptance testing (K2)
Acceptance testing is often the responsibility of the customers or users of a system; other
stakeholders may be involved as well.
The goal in acceptance testing is to establish confidence in the system, parts of the system or
specific non-functional characteristics of the system. Finding defects is not the main focus in
acceptance testing. Acceptance testing may assess the system’s readiness for deployment and
use, although it is not necessarily the final level of testing. For example, a large-scale system
integration test may come after the acceptance test for a system.
Acceptance testing may occur as more than just a single test level, for example:
o A COTS software product may be acceptance tested when it is installed or integrated.
o Acceptance testing of the usability of a component may be done during component testing.
o Acceptance testing of a new functional enhancement may come before system testing.
Typical forms of acceptance testing include the following:
User acceptance testing
Typically verifies the fitness for use of the system by business users.
Operational (acceptance) testing
The acceptance of the system by the system administrators, including:
o testing of backup/restore;
o disaster recovery;
o user management;
o maintenance tasks;
o periodic checks of security vulnerabilities.
Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract’s acceptance criteria for producing
custom-developed software. Acceptance criteria should be defined when the contract is agreed.
Regulation acceptance testing is performed against any regulations that must be adhered to, such
as governmental, legal or safety regulations.
Alpha and beta (or field) testing
Developers of market, or COTS, software often want to get feedback from potential or existing
customers in their market before the software product is put up for sale commercially. Alpha testing
is performed at the developing organization’s site. Beta testing, or field testing, is performed by
people at their own locations. Both are performed by potential customers, not the developers of the
product.
Organizations may use other terms as well, such as factory acceptance testing and site acceptance
testing for systems that are tested before and after being moved to a customer’s site.
2.3 Test types (K2) 40 minutes
Terms
Black-box testing, code coverage, functional testing, interoperability testing, load testing,
maintainability testing, performance testing, portability testing, reliability testing, security testing,
specification-based testing, stress testing, structural testing, usability testing, white-box testing.
Background
A group of test activities can be aimed at verifying the software system (or a part of a system)
based on a specific reason or target for testing.
A test type is focused on a particular test objective, which could be the testing of a function to be
performed by the software; a non-functional quality characteristic, such as reliability or usability, the
structure or architecture of the software or system; or related to changes, i.e. confirming that defects
have been fixed (confirmation testing) and looking for unintended changes (regression testing).
A model of the software may be developed and/or used in structural and functional testing, for
example, in functional testing a process flow model, a state transition model or a plain language
specification; and for structural testing a control flow model or menu structure model.
2.3.1 Testing of function (functional testing) (K2)
The functions that a system, subsystem or component are to perform may be described in work
products such as a requirements specification, use cases, or a functional specification, or they may
be undocumented. The functions are “what” the system does.
Functional tests are based on functions and features (described in documents or understood by the
testers) and their interoperability with specific systems, and may be performed at all test levels (e.g.
tests for components may be based on a component specification).
Specification-based techniques may be used to derive test conditions and test cases from the
functionality of the software or system. (See Chapter 4.) Functional testing considers the external
behaviour of the software (black-box testing).
A type of functional testing, security testing, investigates the functions (e.g. a firewall) relating to
detection of threats, such as viruses, from malicious outsiders. Another type of functional testing,
interoperability testing, evaluates the capability of the software product to interact with one or more
specified components or systems.
2.3.2 Testing of non-functional software characteristics (non-functional
testing) (K2)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress
testing, usability testing, maintainability testing, reliability testing and portability testing. It is the
testing of “how” the system works.
Non-functional testing may be performed at all test levels. The term non-functional testing describes
the tests required to measure characteristics of systems and software that can be quantified on a
varying scale, such as response times for performance testing. These tests can be referenced to a
quality model such as the one defined in ‘Software Engineering – Software Product Quality’ (ISO
9126).
2.3.3 Testing of software structure/architecture (structural testing) (K2)
Structural (white-box) testing may be performed at all test levels. Structural techniques are best
used after specification-based techniques, in order to help measure the thoroughness of testing
through assessment of coverage of a type of structure.
Coverage is the extent that a structure has been exercised by a test suite, expressed as a
percentage of the items being covered. If coverage is not 100%, then more tests may be designed
to test those items that were missed and, therefore, increase coverage. Coverage techniques are
covered in Chapter 4.
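As a minimal illustration of the percentage calculation described above (the function name and figures are invented, not taken from the syllabus), coverage can be computed from the number of structural items exercised and the total number of items:

def coverage_percent(items_exercised: int, items_total: int) -> float:
    """Coverage = exercised items / total items, expressed as a percentage."""
    if items_total <= 0:
        raise ValueError("total number of items must be positive")
    return 100.0 * items_exercised / items_total

# Example: a test suite exercises 45 of 60 decisions -> 75% decision coverage,
# so further tests could be designed for the 15 decision outcomes still missed.
print(coverage_percent(45, 60))  # 75.0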
At all test levels, but especially in component testing and component integration testing, tools can
be used to measure the code coverage of elements, such as statements or decisions. Structural
testing may be based on the architecture of the system, such as a calling hierarchy.
Structural testing approaches can also be applied at system, system integration or acceptance
testing levels (e.g. to business models or menu structures).
2.3.4 Testing related to changes (confirmation testing (retesting) and
regression testing) (K2)
After a defect is detected and fixed, the software should be retested to confirm that the original
defect has been successfully removed. This is called confirmation testing (re-testing). Debugging
(defect fixing) is a development activity, not a testing activity.
Regression testing is the repeated testing of an already tested program, after modification, to
discover any defects introduced or uncovered as a result of the change(s). These defects may be
either in the software being tested, or in another related or unrelated software component. It is
performed when the software, or its environment, is changed. The extent of regression testing is
based on the risk of not finding defects in software that was working previously.
Tests should be repeatable if they are to be used for confirmation testing and to assist regression
testing.
Regression testing may be performed at all test levels, and applies to functional, non-functional and
structural testing. Regression test suites are run many times and generally evolve slowly, so
regression testing is a strong candidate for automation.
2.4 Maintenance testing (K2) 15 minutes
Terms
Impact analysis, maintenance testing.
Background
Once deployed, a software system is often in service for years or decades. During this time the
system and its environment are often corrected, changed or extended. Maintenance testing is done
on an existing operational system, and is triggered by modifications, migration, or retirement of the
software or system.
Modifications include planned enhancement changes (e.g. release-based), corrective and
emergency changes, and changes of environment, such as planned operating system or database
upgrades, or patches to newly exposed or discovered vulnerabilities of the operating system.
Maintenance testing for migration (e.g. from one platform to another) should include operational
tests of the new environment, as well as of the changed software.
Maintenance testing for the retirement of a system may include the testing of data migration or
archiving if long data-retention periods are required.
In addition to testing what has been changed, maintenance testing includes extensive regression
testing to parts of the system that have not been changed. The scope of maintenance testing is
related to the risk of the change, the size of the existing system and to the size of the change.
Depending on the changes, maintenance testing may be done at any or all test levels and for any or
all test types.
Determining how the existing system may be affected by changes is called impact analysis, and is
used to help decide how much regression testing to do.
Maintenance testing can be difficult if specifications are out of date or missing.
References
2.1.3 CMMI, Craig, 2002, Hetzel, 1988, IEEE 12207
2.2 Hetzel, 1988
2.2.4 Copeland, 2004, Myers, 1979
2.3.1 Beizer, 1990, Black, 2001, Copeland, 2004
2.3.2 Black, 2001, ISO 9126
2.3.3 Beizer, 1990, Copeland, 2004, Hetzel, 1988
2.3.4 Hetzel, 1988, IEEE 829
2.4 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE 829
3. Static techniques (K2) 60 minutes
Learning objectives for static techniques
The objectives identify what you will be able to do following the completion of each module.
3.1 Static techniques and the test process (K2)
LO-3.1.1 Recognize software work products that can be examined by the different static
techniques. (K1)
LO-3.1.2 Describe the importance and value of considering static techniques for the assessment
of software work products. (K2)
LO-3.1.3 Explain the difference between static and dynamic techniques. (K2)
LO-3.1.4 Describe the objectives of static analysis and reviews and compare them to dynamic
testing. (K2)
3.2 Review process (K2)
LO-3.2.1 Recall the phases, roles and responsibilities of a typical formal review. (K1)
LO-3.2.2 Explain the differences between different types of review: informal review, technical
review, walkthrough and inspection. (K2)
LO-3.2.3 Explain the factors for successful performance of reviews. (K2)
3.3 Static analysis by tools (K2)
LO-3.3.1 Recall typical defects and errors identified by static analysis and compare them to
reviews and dynamic testing. (K1)
LO-3.3.2 List typical benefits of static analysis. (K1)
LO-3.3.3 List typical code and design defects that may be identified by static analysis tools. (K1)
3.1 Static techniques and the test process (K2) 15 minutes
Terms
Dynamic testing, static testing, static technique.
Background
Unlike dynamic testing, which requires the execution of software, static testing techniques rely on
the manual examination (reviews) and automated analysis (static analysis) of the code or other
project documentation.
Reviews are a way of testing software work products (including code) and can be performed well
before dynamic test execution. Defects detected during reviews early in the life cycle are often
much cheaper to remove than those detected while running tests (e.g. defects found in
requirements).
A review could be done entirely as a manual activity, but there is also tool support. The main
manual activity is to examine a work product and make comments about it. Any software work
product can be reviewed, including requirements specifications, design specifications, code, test
plans, test specifications, test cases, test scripts, user guides or web pages.
Benefits of reviews include early defect detection and correction, development productivity
improvements, reduced development timescales, reduced testing cost and time, lifetime cost
reductions, fewer defects and improved communication. Reviews can find omissions, for example,
in requirements, which are unlikely to be found in dynamic testing.
Reviews, static analysis and dynamic testing have the same objective – identifying defects. They
are complementary: the different techniques can find different types of defects effectively and
efficiently. Compared to dynamic testing, static techniques find causes of failures (defects) rather
than the failures themselves.
Typical defects that are easier to find in reviews than in dynamic testing are: deviations from
standards, requirement defects, design defects, insufficient maintainability and incorrect interface
specifications.
3.2 Review process (K2) 25 minutes
Terms
Entry criteria, formal review, informal review, inspection, metric, moderator/inspection leader, peer
review, reviewer, scribe, technical review, walkthrough.
Background
The different types of reviews vary from very informal (e.g. no written instructions for reviewers) to
very formal (i.e. well structured and regulated). The formality of a review process is related to
factors such as the maturity of the development process, any legal or regulatory requirements or the
need for an audit trail.
The way a review is carried out depends on the agreed objective of the review (e.g. find defects,
gain understanding, or discussion and decision by consensus).
3.2.1 Phases of a formal review (K1)
A typical formal review has the following main phases:
1. Planning: selecting the personnel, allocating roles; defining the entry and exit criteria for more
formal review types (e.g. inspection); and selecting which parts of documents to look at.
2. Kick-off: distributing documents; explaining the objectives, process and documents to the
participants; and checking entry criteria (for more formal review types).
3. Individual preparation: work done by each of the participants on their own before the review
meeting, noting potential defects, questions and comments.
4. Review meeting: discussion or logging, with documented results or minutes (for more formal
review types). The meeting participants may simply note defects, make recommendations for
handling the defects, or make decisions about the defects.
5. Rework: fixing defects found, typically done by the author.
6. Follow-up: checking that defects have been addressed, gathering metrics and checking on exit
criteria (for more formal review types).
3.2.2 Roles and responsibilities (K1)
A typical formal review will include the roles below:
o Manager: decides on the execution of reviews, allocates time in project schedules and
determines if the review objectives have been met.
o Moderator: the person who leads the review of the document or set of documents, including
planning the review, running the meeting, and follow-up after the meeting. If necessary, the
moderator may mediate between the various points of view and is often the person upon whom
the success of the review rests.
o Author: the writer or person with chief responsibility for the document(s) to be reviewed.
o Reviewers: individuals with a specific technical or business background (also called checkers or
inspectors) who, after the necessary preparation, identify and describe findings (e.g. defects) in
the product under review. Reviewers should be chosen to represent different perspectives and
roles in the review process, and should take part in any review meetings.
o Scribe (or recorder): documents all the issues, problems and open points that were identified
during the meeting.
Looking at documents from different perspectives and using checklists can make reviews more
effective and efficient, for example, a checklist based on perspectives such as user, maintainer,
tester or operations, or a checklist of typical requirements problems.
3.2.3 Types of review (K2)
A single document may be the subject of more than one review. If more than one type of review is
used, the order may vary. For example, an informal review may be carried out before a technical
review, or an inspection may be carried out on a requirements specification before a walkthrough
with customers. The main characteristics, options and purposes of common review types are:
Informal review
Key characteristics:
o no formal process;
o there may be pair programming or a technical lead reviewing designs and code;
o optionally may be documented;
o may vary in usefulness depending on the reviewer;
o main purpose: inexpensive way to get some benefit.
Walkthrough
Key characteristics:
o meeting led by author;
o scenarios, dry runs, peer group;
o open-ended sessions;
o optionally a pre-meeting preparation of reviewers, review report, list of findings and scribe (who
is not the author);
o may vary in practice from quite informal to very formal;
o main purposes: learning, gaining understanding, defect finding.
Technical review
Key characteristics:
o documented, defined defect-detection process that includes peers and technical experts;
o may be performed as a peer review without management participation;
o ideally led by trained moderator (not the author);
o pre-meeting preparation;
o optionally the use of checklists, review report, list of findings and management participation;
o may vary in practice from quite informal to very formal;
o main purposes: discuss, make decisions, evaluate alternatives, find defects, solve technical
problems and check conformance to specifications and standards.
Inspection
Key characteristics:
o led by trained moderator (not the author);
o usually peer examination;
o defined roles;
o includes metrics;
o formal process based on rules and checklists with entry and exit criteria;
o pre-meeting preparation;
o inspection report, list of findings;
o formal follow-up process;
o optionally, process improvement and reader;
o main purpose: find defects.
Walkthroughs, technical reviews and inspections can be performed within a peer group –
colleagues at the same organizational level. This type of review is called a “peer review”.
3.2.4 Success factors for reviews (K2)
Success factors for reviews include:
o Each review has a clear predefined objective.
o The right people for the review objectives are involved.
o Defects found are welcomed, and expressed objectively.
o People issues and psychological aspects are dealt with (e.g. making it a positive experience for
the author).
o Review techniques are applied that are suitable to the type and level of software work products
and reviewers.
o Checklists or roles are used if appropriate to increase effectiveness of defect identification.
o Training is given in review techniques, especially the more formal techniques, such as
inspection.
o Management supports a good review process (e.g. by incorporating adequate time for review
activities in project schedules).
o There is an emphasis on learning and process improvement.
3.3 Static analysis by tools (K2) 20 minutes
Terms
Compiler, complexity, control flow, data flow, static analysis.
Background
The objective of static analysis is to find defects in software source code and software models.
Static analysis is performed without actually executing the software being examined by the tool;
dynamic testing does execute the software code. Static analysis can locate defects that are hard to
find in testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools
analyze program code (e.g. control flow and data flow), as well as generated output such as HTML
and XML.
The value of static analysis is:
o Early detection of defects prior to test execution.
o Early warning about suspicious aspects of the code or design, by the calculation of metrics,
such as a high complexity measure.
o Identification of defects not easily found by dynamic testing.
o Detecting dependencies and inconsistencies in software models, such as links.
o Improved maintainability of code and design.
o Prevention of defects, if lessons are learned in development.
Typical defects discovered by static analysis tools include the following (a short illustrative code fragment follows the list):
o referencing a variable with an undefined value;
o inconsistent interface between modules and components;
o variables that are never used;
o unreachable (dead) code;
o programming standards violations;
o security vulnerabilities;
o syntax violations of code and software models.
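The fragment below is a hypothetical illustration (not from the syllabus) of several of the defect types listed above; a typical static analysis tool, or a strict compiler, would report them without executing the code:

def apply_discount(price):
    unused_rate = 0.05              # a variable that is never used
    if price > 100:
        discount = 10
    # no else branch: 'discount' below may be referenced with an undefined value
    total = price - discount
    return total
    print("done")                   # unreachable (dead) code after the return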
Static analysis tools are typically used by developers (checking against predefined rules or
programming standards) before and during component and integration testing, and by designers
during software modeling. Static analysis tools may produce a large number of warning messages,
which need to be well managed to allow the most effective use of the tool.
Compilers may offer some support for static analysis, including the calculation of metrics.
References
3.2 IEEE 1028
3.2.2 Gilb, 1993, van Veenendaal, 2004
3.2.4 Gilb, 1993, IEEE 1028
3.3 Van Veenendaal, 2004
4. Test design techniques (K3) 285 minutes
Learning objectives for test design techniques
The objectives identify what you will be able to do following the completion of each module.
4.1 The test development process (K2)
LO-4.1.1 Differentiate between a test design specification, test case specification and test
procedure specification. (K2)
LO-4.1.2 Compare the terms test condition, test case and test procedure. (K2)
LO-4.1.3 Evaluate the quality of test cases. Do they:
o show clear traceability to the requirements;
o contain an expected result. (K2)
LO-4.1.4 Translate test cases into a well-structured test procedure specification at a level of
detail relevant to the knowledge of the testers. (K3)
4.2 Categories of test design techniques (K2)
LO-4.2.1 Recall reasons that both specification-based (black-box) and structure-based (white-box)
approaches to test case design are useful, and list the common techniques for
each. (K1)
LO-4.2.2 Explain the characteristics and differences between specification-based testing,
structure-based testing and experience-based testing. (K2)
4.3 Specification-based or black-box techniques (K3)
LO-4.3.1 Write test cases from given software models using the following test design techniques:
(K3)
o equivalence partitioning;
o boundary value analysis;
o decision table testing;
o state transition testing.
LO-4.3.2 Understand the main purpose of each of the four techniques, what level and type of
testing could use the technique, and how coverage may be measured. (K2)
LO-4.3.3 Understand the concept of use case testing and its benefits. (K2)
4.4 Structure-based or white-box techniques (K3)
LO-4.4.1 Describe the concept and importance of code coverage. (K2)
LO-4.4.2 Explain the concepts of statement and decision coverage, and understand that these
concepts can also be used at other test levels than component testing (e.g. on business
procedures at system level). (K2)
LO-4.4.3 Write test cases from given control flows using the following test design techniques:
o statement testing;
o decision testing. (K3)
LO-4.4.4 Assess statement and decision coverage for completeness. (K3)
4.5 Experience-based techniques (K2)
LO-4.5.1 Recall reasons for writing test cases based on intuition, experience and knowledge
about common defects. (K1)
LO-4.5.2 Compare experience-based techniques with specification-based testing techniques.
(K2)
4.6 Choosing test techniques (K2)
LO-4.6.1 List the factors that influence the selection of the appropriate test design technique for a
particular kind of problem, such as the type of system, risk, customer requirements,
models for use case modeling, requirements models or tester knowledge. (K2)
4.1 The test development process (K2) 15 minutes
Terms
Test case specification, test design, test execution schedule, test procedure specification, test
script, traceability.
Background
The process described in this section can be done in different ways, from very informal with little or
no documentation, to very formal (as it is described below). The level of formality depends on the
context of the testing, including the organization, the maturity of testing and development
processes, time constraints and the people involved.
During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e.
to identify the test conditions. A test condition is defined as an item or event that could be verified by
one or more test cases (e.g. a function, transaction, quality characteristic or structural element).
Establishing traceability from test conditions back to the specifications and requirements enables
both impact analysis, when requirements change, and requirements coverage to be determined for
a set of tests. During test analysis the detailed test approach is implemented to select the test
design techniques to use, based on, among other considerations, the risks identified (see Chapter 5
for more on risk analysis).
During test design the test cases and test data are created and specified. A test case consists of a
set of input values, execution preconditions, expected results and execution post-conditions,
developed to cover certain test condition(s). The ‘Standard for Software Test Documentation’ (IEEE
829) describes the content of test design specifications (containing test conditions) and test case
specifications.
Expected results should be produced as part of the specification of a test case and include outputs,
changes to data and states, and any other consequences of the test. If expected results have not
been defined then a plausible, but erroneous, result may be interpreted as the correct one.
Expected results should ideally be defined prior to test execution.
During test implementation the test cases are developed, implemented, prioritized and organized in
the test procedure specification. The test procedure (or manual test script) specifies the sequence
of action for the execution of a test. If tests are run using a test execution tool, the sequence of
actions is specified in a test script (which is an automated test procedure).
The various test procedures and automated test scripts are subsequently formed into a test
execution schedule that defines the order in which the various test procedures, and possibly
automated test scripts, are executed, when they are to be carried out and by whom. The test
execution schedule will take into account such factors as regression tests, prioritization, and
technical and logical dependencies.
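As a rough sketch of such a schedule (the procedure names, priorities and data layout are assumptions; IEEE 829 does not prescribe them), test procedures can be ordered by priority while respecting technical and logical dependencies:

from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical test procedures: name -> (priority, procedures that must run first)
procedures = {
    "TP-01 login":        (1, []),
    "TP-02 create order": (2, ["TP-01 login"]),
    "TP-03 pay order":    (1, ["TP-02 create order"]),
    "TP-04 reports":      (3, []),
}

def execution_schedule(procs):
    """Order procedures so dependencies come first; among ready ones, highest priority (1) goes next."""
    ts = TopologicalSorter({name: set(deps) for name, (_, deps) in procs.items()})
    ts.prepare()
    schedule, ready = [], list(ts.get_ready())
    while ready:
        nxt = min(ready, key=lambda name: procs[name][0])
        ready.remove(nxt)
        schedule.append(nxt)
        ts.done(nxt)
        ready.extend(ts.get_ready())
    return schedule

print(execution_schedule(procedures))
# ['TP-01 login', 'TP-02 create order', 'TP-03 pay order', 'TP-04 reports']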
4.2 Categories of test design techniques (K2) 15 minutes
Terms
Black-box test design technique, experience-based test design technique, specification-based test
design technique, structure-based test design technique, white-box test design technique.
Background
The purpose of a test design technique is to identify test conditions and test cases.
It is a classic distinction to denote test techniques as black box or white box. Black-box techniques
(which include specification-based and experience-based techniques) are a way to derive and
select test conditions or test cases based on an analysis of the test basis documentation and the
experience of developers, testers and users, whether functional or non-functional, for a component
or system without reference to its internal structure. White-box techniques (also called structural or
structure-based techniques) are based on an analysis of the structure of the component or system.
Some techniques fall clearly into a single category; others have elements of more than one
category.
This syllabus refers to specification-based or experience-based approaches as black-box
techniques and structure-based as white-box techniques.
Common features of specification-based techniques:
o Models, either formal or informal, are used for the specification of the problem to be solved, the
software or its components.
o From these models test cases can be derived systematically.
Common features of structure-based techniques:
o Information about how the software is constructed is used to derive the test cases, for example,
code and design.
o The extent of coverage of the software can be measured for existing test cases, and further test
cases can be derived systematically to increase coverage.
Common features of experience-based techniques:
o The knowledge and experience of people are used to derive the test cases.
o Knowledge of testers, developers, users and other stakeholders about the software, its
usage and its environment.
o Knowledge about likely defects and their distribution.
4.3 Specification-based or black-box techniques (K3) 150 minutes
Terms
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing,
use case testing.
4.3.1 Equivalence partitioning (K3)
Inputs to the software or system are divided into groups that are expected to exhibit similar
behaviour, so they are likely to be processed in the same way. Equivalence partitions (or classes)
can be found for both valid data and invalid data, i.e. values that should be rejected. Partitions can
also be identified for outputs, internal values, time-related values (e.g. before or after an event) and
for interface parameters (e.g. during integration testing). Tests can be designed to cover partitions.
Equivalence partitioning is applicable at all levels of testing.
Equivalence partitioning as a technique can be used to achieve input and output coverage. It can be
applied to human input, input via interfaces to a system, or interface parameters in integration
testing.
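A minimal sketch of the technique, assuming a hypothetical rule that only ages 18 to 65 are accepted: the input domain falls into one valid and two invalid partitions, and one representative value per partition is normally tested.

def is_eligible(age: int) -> bool:
    """Hypothetical rule: applicants aged 18 to 65 inclusive are eligible."""
    return 18 <= age <= 65

# One representative test value per equivalence partition:
assert is_eligible(40) is True      # valid partition: 18..65
assert is_eligible(10) is False     # invalid partition: below 18
assert is_eligible(70) is False     # invalid partition: above 65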
4.3.2 Boundary value analysis (K3)
Behaviour at the edge of each equivalence partition is more likely to be incorrect, so boundaries are
an area where testing is likely to yield defects. The maximum and minimum values of a partition are
its boundary values. A boundary value for a valid partition is a valid boundary value; the boundary of
an invalid partition is an invalid boundary value. Tests can be designed to cover both valid and
invalid boundary values. When designing test cases, a test for each boundary value is chosen.
Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its
defect-finding capability is high; detailed specifications are helpful.
This technique is often considered as an extension of equivalence partitioning. It can be used on
equivalence classes for user input on screen as well as, for example, on time ranges (e.g. time out,
transactional speed requirements) or table ranges (e.g. table size is 256*256). Boundary values
may also be used for test data selection.
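Continuing the hypothetical 18-to-65 rule used in the equivalence partitioning sketch above, boundary value analysis places tests at the edges of the partitions rather than somewhere inside them:

def is_eligible(age: int) -> bool:
    """Hypothetical rule: applicants aged 18 to 65 inclusive are eligible."""
    return 18 <= age <= 65

# Valid boundary values (minimum and maximum of the valid partition):
assert is_eligible(18) is True
assert is_eligible(65) is True
# Invalid boundary values (just outside the valid partition):
assert is_eligible(17) is False
assert is_eligible(66) is False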
4.3.3 Decision table testing (K3)
Decision tables are a good way to capture system requirements that contain logical conditions, and
to document internal system design. They may be used to record complex business rules that a
system is to implement. The specification is analyzed, and conditions and actions of the system are
identified. The input conditions and actions are most often stated in such a way that they can either
be true or false (Boolean). The decision table contains the triggering conditions, often combinations
of true and false for all input conditions, and the resulting actions for each combination of
conditions. Each column of the table corresponds to a business rule that defines a unique
combination of conditions, which result in the execution of the actions associated with that rule. The
coverage standard commonly used with decision table testing is to have at least one test per
column, which typically involves covering all combinations of triggering conditions.
The strength of decision table testing is that it creates combinations of conditions that might not
otherwise have been exercised during testing. It may be applied to all situations when the action of
the software depends on several logical decisions.
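A small sketch of the technique with invented business rules (a discount is granted only to members whose order exceeds 100): the decision table is held as data and, following the coverage standard above, at least one test is run per column (rule).

def discount_granted(is_member: bool, order_over_100: bool) -> bool:
    """Hypothetical rule: discount only for members with orders over 100."""
    return is_member and order_over_100

# Decision table: one column (rule) per combination of conditions, with the expected action.
decision_table = [
    # (is_member, order_over_100, expected_discount)
    (True,  True,  True),    # rule 1
    (True,  False, False),   # rule 2
    (False, True,  False),   # rule 3
    (False, False, False),   # rule 4
]

for is_member, over_100, expected in decision_table:
    assert discount_granted(is_member, over_100) is expected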
4.3.4 State transition testing (K3)
A system may exhibit a different response depending on current conditions or previous history (its
state). In this case, that aspect of the system can be shown as a state transition diagram. It allows
the tester to view the software in terms of its states, transitions between states, the inputs or events
that trigger state changes (transitions) and the actions which may result from those transitions. The
states of the system or object under test are separate, identifiable and finite in number. A state table
shows the relationship between the states and inputs, and can highlight possible transitions that are
invalid. Tests can be designed to cover a typical sequence of states, to cover every state, to
exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.
State transition testing is much used within the embedded software industry and technical
automation in general. However, the technique is also suitable for modeling a business object
having specific states or testing screen-dialogue flows (e.g. for internet applications or business
scenarios).
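A minimal sketch with an invented two-state example (a document that is either in draft or published): the state table drives the implementation, tests exercise every valid transition, and one invalid transition is checked to be rejected.

# Hypothetical state table: (current state, event) -> next state.
TRANSITIONS = {
    ("draft", "publish"):     "published",
    ("published", "retract"): "draft",
}

def next_state(state: str, event: str) -> str:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")

# Exercise every valid transition:
assert next_state("draft", "publish") == "published"
assert next_state("published", "retract") == "draft"

# Test an invalid transition (the state table marks it as not allowed):
try:
    next_state("draft", "retract")
except ValueError:
    pass
else:
    raise AssertionError("invalid transition was not rejected")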
4.3.5 Use case testing (K2)
Tests can be specified from use cases or business scenarios. A use case describes interactions
between actors, including users and the system, which produce a result of value to a system user.
Each use case has preconditions, which need to be met for a use case to work successfully. Each
use case terminates with post-conditions, which are the observable results and final state of the
system after the use case has been completed. A use case usually has a mainstream (i.e. most
likely) scenario, and sometimes alternative branches.
Use cases describe the “process flows” through a system based on its actual likely use, so the test
cases derived from use cases are most useful in uncovering defects in the process flows during
real-world use of the system. Use cases, often referred to as scenarios, are very useful for
designing acceptance tests with customer/user participation. They also help uncover integration
defects caused by the interaction and interference of different components, which individual
component testing would not see.
4.4 Structure-based or white-box techniques (K3) 60 minutes
Terms
Code coverage, decision coverage, statement coverage, structure-based testing.
Background
Structure-based testing/white-box testing is based on an identified structure of the software or
system, as seen in the following examples:
o Component level: the structure is that of the code itself, i.e. statements, decisions or branches.
o Integration level: the structure may be a call tree (a diagram in which modules call other
modules).
o System level: the structure may be a menu structure, business process or web page structure.
In this section, two code-related structural techniques for code coverage, based on statements and
decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the
alternatives for each decision.
4.4.1 Statement testing and coverage (K3)
In component testing, statement coverage is the assessment of the percentage of executable
statements that have been exercised by a test case suite. Statement testing derives test cases to
execute specific statements, normally to increase statement coverage.
4.4.2 Decision testing and coverage (K3)
Decision coverage, related to branch testing, is the assessment of the percentage of decision
outcomes (e.g. the True and False options of an IF statement) that have been exercised by a test
case suite. Decision testing derives test cases to execute specific decision outcomes, normally to
increase decision coverage.
Decision testing is a form of control flow testing as it generates a specific flow of control through the
decision points. Decision coverage is stronger than statement coverage: 100% decision coverage
guarantees 100% statement coverage, but not vice versa.
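The asymmetry can be seen in a small invented example: a single test with a negative input executes every statement (100% statement coverage) but exercises only the True outcome of the decision (50% decision coverage); a second test is needed to cover the False outcome.

def absolute(x: int) -> int:
    if x < 0:        # one decision with two outcomes: True and False
        x = -x       # statement executed only on the True outcome
    return x

# Test 1: x = -5 executes every statement -> 100% statement coverage,
# but only the True outcome of the IF -> 50% decision coverage.
assert absolute(-5) == 5

# Test 2: x = 3 exercises the False outcome -> together, 100% decision coverage.
assert absolute(3) == 3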
4.4.3 Other structure-based techniques (K1)
There are stronger levels of structural coverage beyond decision coverage, for example, condition
coverage and multiple condition coverage.
The concept of coverage can also be applied at other test levels (e.g. at integration level) where the
percentage of modules, components or classes that have been exercised by a test case suite could
be expressed as module, component or class coverage.
Tool support is useful for the structural testing of code.
4.5 Experience-based techniques (K2) 30 minutes
Terms
Exploratory testing, fault attack.
Background
Experience-based testing is where tests are derived from the tester’s skill and intuition and their
experience with similar applications and technologies. When used to augment systematic
techniques, these techniques can be useful in identifying special tests not easily captured by formal
techniques, especially when applied after more formal approaches. However, this technique may
yield widely varying degrees of effectiveness, depending on the testers’ experience.
A commonly used experience-based technique is error guessing. Generally testers anticipate
defects based on experience. A structured approach to the error guessing technique is to
enumerate a list of possible errors and to design tests that attack these errors. This systematic
approach is called fault attack. These defect and failure lists can be built based on experience,
available defect and failure data, and from common knowledge about why software fails.
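A sketch of such an enumerated fault attack, using an invented function and an invented list of error-prone inputs (empty and whitespace-only strings, very long values, padded values):

def normalize_name(name: str) -> str:
    """Hypothetical function under test: trims whitespace and rejects empty names."""
    cleaned = name.strip()
    if not cleaned:
        raise ValueError("name must not be empty")
    return cleaned

# Fault-attack list built from experience of typical failures:
suspect_inputs = ["", "   ", "\t\n", "a" * 10_000, " padded "]

for value in suspect_inputs:
    try:
        result = normalize_name(value)
    except ValueError:
        continue                      # rejection is an acceptable outcome
    assert result == result.strip()   # accepted values must at least be trimmed
    assert result                     # and must be non-empty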
Exploratory testing is concurrent test design, test execution, test logging and learning, based on a
test charter containing test objectives, and carried out within time-boxes. It is an approach that is
most useful where there are few or inadequate specifications and severe time pressure, or in order
to augment or complement other, more formal testing. It can serve as a check on the test process,
to help ensure that the most serious defects are found.
4.6 Choosing test techniques (K2) 15 minutes
Terms
No specific terms.
Background
The choice of which test techniques to use depends on a number of factors, including the type of
system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test
objective, documentation available, knowledge of the testers, time and budget, development life
cycle, use case models and previous experience of types of defects found.
Some techniques are more applicable to certain situations and test levels; others are applicable to
all test levels.
References
4.1 Craig, 2002, Hetzel, 1988, IEEE 829
4.2 Beizer, 1990, Copeland, 2004
4.3.1 Copeland, 2004, Myers, 1979
4.3.2 Copeland, 2004, Myers, 1979
4.3.3 Beizer, 1990, Copeland, 2004
4.3.4 Beizer, 1990, Copeland, 2004
4.3.5 Copeland, 2004
4.4.3 Beizer, 1990, Copeland, 2004
4.5 Kaner, 2002
4.6 Beizer, 1990, Copeland, 2004
5. Test management (K3) 170 minutes
Learning objectives for test management
The objectives identify what you will be able to do following the completion of each module.
5.1 Test organization (K2)
LO-5.1.1 Recognize the importance of independent testing. (K1)
LO-5.1.2 List the benefits and drawbacks of independent testing within an organization. (K2)
LO-5.1.3 Recognize the different team members to be considered for the creation of a test team.
(K1)
LO-5.1.4 Recall the tasks of typical test leader and tester. (K1)
5.2 Test planning and estimation (K2)
LO-5.2.1 Recognize the different levels and objectives of test planning. (K1)
LO-5.2.2 Summarize the purpose and content of the test plan, test design specification and test
procedure documents according to the ‘Standard for Software Test Documentation’
(IEEE 829). (K2)
LO-5.2.3 Differentiate between conceptually different test approaches, such as analytical, model-based,
methodical, process/standard compliant, dynamic/heuristic, consultative and
regression averse. (K2)
LO-5.2.4 Differentiate between the subject of test planning for a system and for scheduling test
execution. (K2)
LO-5.2.5 Write a test execution schedule for a given set of test cases, considering prioritization,
and technical and logical dependencies. (K3)
LO-5.2.6 List test preparation and execution activities that should be considered during test
planning. (K1)
LO-5.2.7 Recall typical factors that influence the effort related to testing. (K1)
LO-5.2.8 Differentiate between two conceptually different estimation approaches: the metrics-based
approach and the expert-based approach. (K2)
LO-5.2.9 Recognize/justify adequate exit criteria for specific test levels and groups of test cases
(e.g. for integration testing, acceptance testing or test cases for usability testing). (K2)
5.3 Test progress monitoring and control (K2)
LO-5.3.1 Recall common metrics used for monitoring test preparation and execution. (K1)
LO-5.3.2 Understand and interpret test metrics for test reporting and test control (e.g. defects
found and fixed, and tests passed and failed). (K2)
LO-5.3.3 Summarize the purpose and content of the test summary report document according to
the ‘Standard for Software Test Documentation’ (IEEE 829). (K2)
5.4 Configuration management (K2)
LO-5.4.1 Summarize how configuration management supports testing. (K2)
5.5 Risk and testing (K2)
LO-5.5.1 Describe a risk as a possible problem that would threaten the achievement of one or
more stakeholders’ project objectives. (K2)
LO-5.5.2 Remember that risks are determined by likelihood (of happening) and impact (harm
resulting if it does happen). (K1)
LO-5.5.3 Distinguish between the project and product risks. (K2)
LO-5.5.4 Recognize typical product and project risks. (K1)
LO-5.5.5 Describe, using examples, how risk analysis and risk management may be used for test
planning. (K2)
5.6 Incident management (K3)
LO-5.6.1 Recognize the content of an incident report according to the ‘Standard for Software
Test Documentation’ (IEEE 829). (K1)
LO-5.6.2 Write an incident report covering the observation of a failure during testing. (K3)
5.1 Test organization (K2) 30 minutes
Terms
Tester, test leader, test manager.
5.1.1 Test organization and independence (K2)
The effectiveness of finding defects by testing and reviews can be improved by using independent
testers. Options for independence are:
o No independent testers. Developers test their own code.
o Independent testers within the development teams.
o Independent test team or group within the organization, reporting to project management or
executive management.
o Independent testers from the business organization or user community.
o Independent test specialists for specific test targets such as usability testers, security testers or
certification testers (who certify a software product against standards and regulations).
o Independent testers outsourced or external to the organization.
For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with
some or all of the levels done by independent testers. Development staff may participate in testing,
especially at the lower levels, but their lack of objectivity often limits their effectiveness. The
independent testers may have the authority to require and define test processes and rules, but
testers should take on such process-related roles only in the presence of a clear management
mandate to do so.
The benefits of independence include:
o Independent testers see other and different defects, and are unbiased.
o An independent tester can verify assumptions people made during specification and
implementation of the system.
Drawbacks include:
o Isolation from the development team (if treated as totally independent).
o Independent testers may be the bottleneck as the last checkpoint.
o Developers may lose a sense of responsibility for quality.
Testing tasks may be done by people in a specific testing role, or may be done by someone in
another role, such as a project manager, quality manager, developer, business and domain expert,
infrastructure or IT operations.
5.1.2 Tasks of the test leader and tester (K1)
In this syllabus two test positions are covered, test leader and tester. The activities and tasks
performed by people in these two roles depend on the project and product context, the people in the
roles, and the organization.
Sometimes the test leader is called a test manager or test coordinator. The role of the test leader
may be performed by a project manager, a development manager, a quality assurance manager or
the manager of a test group. In larger projects two positions may exist: test leader and test
manager. Typically the test leader plans, monitors and controls the testing activities and tasks as
defined in Section 1.4.
Typical test leader tasks may include:
o Coordinate the test strategy and plan with project managers and others.
o Write or review a test strategy for the project, and test policy for the organization.
o Contribute the testing perspective to other project activities, such as integration planning.
o Plan the tests – considering the context and understanding the test objectives and risks –
including selecting test approaches, estimating the time, effort and cost of testing, acquiring
resources, defining test levels, cycles, and planning incident management.
o Initiate the specification, preparation, implementation and execution of tests, monitor the test
results and check the exit criteria.
o Adapt planning based on test results and progress (sometimes documented in status reports)
and take any action necessary to compensate for problems.
o Set up adequate configuration management of testware for traceability.
o Introduce suitable metrics for measuring test progress and evaluating the quality of the testing
and the product.
o Decide what should be automated, to what degree, and how.
o Select tools to support testing and organize any training in tool use for testers.
o Decide about the implementation of the test environment.
o Write test summary reports based on the information gathered during testing.
Typical tester tasks may include:
o Review and contribute to test plans.
o Analyze, review and assess user requirements, specifications and models for testability.
o Create test specifications.
o Set up the test environment (often coordinating with system administration and network
management).
o Prepare and acquire test data.
o Implement tests on all test levels, execute and log the tests, evaluate the results and document
the deviations from expected results.
o Use test administration or management tools and test monitoring tools as required.
o Automate tests (may be supported by a developer or a test automation expert).
o Measure performance of components and systems (if applicable).
o Review tests developed by others.
People who work on test analysis, test design, specific test types or test automation may be
specialists in these roles. Depending on the test level and the risks related to the product and the
project, different people may take over the role of tester, keeping some degree of independence.
Typically testers at the component and integration level would be developers, testers at the
acceptance test level would be business experts and users, and testers for operational acceptance
testing would be operators.
5.2 Test planning and estimation (K2) 40 minutes
Terms
Test approach.
5.2.1 Test planning (K2)
This section covers the purpose of test planning within development and implementation projects,
and for maintenance activities. Planning may be documented in a project or master test plan, and in
separate test plans for test levels, such as system testing and acceptance testing. Outlines of test
planning documents are covered by the ‘Standard for Software Test Documentation’ (IEEE 829).
Planning is influenced by the test policy of the organization, the scope of testing, objectives, risks,
constraints, criticality, testability and the availability of resources. The more the project and test
planning progresses, the more information is available, and the more detail that can be included in
the plan.
Test planning is a continuous activity and is performed in all life cycle processes and activities.
Feedback from test activities is used to recognize changing risks so that planning can be adjusted.
5.2.2 Test planning activities (K2)
Test planning activities may include:
o Determining the scope and risks, and identifying the objectives of testing.
o Defining the overall approach of testing (the test strategy), including the definition of the test
levels and entry and exit criteria.
o Integrating and coordinating the testing activities into the software life cycle activities:
acquisition, supply, development, operation and maintenance.
o Making decisions about what to test, what roles will perform the test activities, how the test
activities should be done, and how the test results will be evaluated.
o Scheduling test analysis and design activities.
o Scheduling test implementation, execution and evaluation.
o Assigning resources for the different activities defined.
o Defining the amount, level of detail, structure and templates for the test documentation.
o Selecting metrics for monitoring and controlling test preparation and execution, defect
resolution and risk issues.
o Setting the level of detail for test procedures in order to provide enough information to support
reproducible test preparation and execution.
5.2.3 Exit criteria (K2)
The purpose of exit criteria is to define when to stop testing, such as at the end of a test level or
when a set of tests has a specific goal.
Typically exit criteria may consist of:
o Thoroughness measures, such as coverage of code, functionality or risk.
o Estimates of defect density or reliability measures.
o Cost.
o Residual risks, such as defects not fixed or lack of test coverage in certain areas.
o Schedules such as those based on time to market.
5.2.4 Test estimation (K2)
Two approaches for the estimation of test effort are covered in this syllabus:
o The metrics-based approach: estimating the testing effort based on metrics of former or similar
projects or based on typical values.
o The expert-based approach: estimating the tasks by the owner of these tasks or by experts.
Once the test effort is estimated, resources can be identified and a schedule can be drawn up.
The testing effort may depend on a number of factors, including:
o Characteristics of the product: the quality of the specification and other information used for test
models (i.e. the test basis), the size of the product, the complexity of the problem domain, the
requirements for reliability and security, and the requirements for documentation.
o Characteristics of the development process: the stability of the organization, tools used, test
process, skills of the people involved, and time pressure.
o The outcome of testing: the number of defects and the amount of rework required.
5.2.5 Test approaches (test strategies) (K2)
One way to classify test approaches or strategies is based on the point in time at which the bulk of
the test design work is begun:
o Preventative approaches, where tests are designed as early as possible.
o Reactive approaches, where test design comes after the software or system has been
produced.
Typical approaches or strategies include:
o Analytical approaches, such as risk-based testing where testing is directed to areas of greatest
risk.
o Model-based approaches, such as stochastic testing using statistical information about failure
rates (such as reliability growth models) or usage (such as operational profiles).
o Methodical approaches, such as failure-based (including error guessing and fault-attacks),
experience-based, checklist-based, and quality-characteristic based.
o Process- or standard-compliant approaches, such as those specified by industry-specific
standards or the various agile methodologies.
o Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive
to events than pre-planned, and where execution and evaluation are concurrent tasks.
o Consultative approaches, such as those where test coverage is driven primarily by the advice
and guidance of technology and/or business domain experts outside the test team.
o Regression-averse approaches, such as those that include reuse of existing test material,
extensive automation of functional regression tests, and standard test suites.
Different approaches may be combined, for example, a risk-based dynamic approach.
The selection of a test approach should consider the context, including:
o Risk of failure of the project, hazards to the product and risks of product failure to humans, the
environment and the company.
o Skills and experience of the people in the proposed techniques, tools and methods.
o The objective of the testing endeavour and the mission of the testing team.
o Regulatory aspects, such as external and internal regulations for the development process.
o The nature of the product and the business.
5.3 Test progress monitoring and control (K2) 20 minutes
Terms
Defect density, failure rate, test control, test monitoring, test report.
5.3.1 Test progress monitoring (K1)
The purpose of test monitoring is to give feedback and visibility about test activities. Information to
be monitored may be collected manually or automatically and may be used to measure exit criteria,
such as coverage. Metrics may also be used to assess progress against the planned schedule and
budget. Common test metrics include the following (a small calculation sketch follows the list):
o Percentage of work done in test case preparation (or percentage of planned test cases
prepared).
o Percentage of work done in test environment preparation.
o Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).
o Defect information (e.g. defect density, defects found and fixed, failure rate, and retest results).
o Test coverage of requirements, risks or code.
o Subjective confidence of testers in the product.
o Dates of test milestones.
o Testing costs, including the cost compared to the benefit of finding the next defect or to run the
next test.
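As a minimal sketch (the counts and variable names are invented), several of the metrics above can be derived from simple execution counts:

# Hypothetical counts collected from a test management tool:
planned_cases, prepared_cases = 200, 150
executed, passed, failed = 120, 100, 20
defects_found, size_kloc = 36, 12.0

preparation_done = 100.0 * prepared_cases / planned_cases   # 75.0 % of planned cases prepared
execution_done   = 100.0 * executed / planned_cases         # 60.0 % of planned cases executed
pass_rate        = 100.0 * passed / executed                # ~83.3 % of executed cases passed
defect_density   = defects_found / size_kloc                # 3.0 defects per KLOC

print(preparation_done, execution_done, round(pass_rate, 1), defect_density)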
5.3.2 Test reporting (K2)
Test reporting is concerned with summarizing information about the testing endeavour, including:
o What happened during a period of testing, such as dates when exit criteria were met.
o Analyzed information and metrics to support recommendations and decisions about future
actions, such as an assessment of defects remaining, the economic benefit of continued
testing, outstanding risks, and the level of confidence in tested software.
The outline of a test summary report is given in ‘Standard for Software Test Documentation’ (IEEE
829).
Metrics should be collected during and at the end of a test level in order to assess:
o The adequacy of the test objectives for that test level.
o The adequacy of the test approaches taken.
o The effectiveness of the testing with respect to its objectives.
5.3.3 Test control (K2)
Test control describes any guiding or corrective actions taken as a result of information and metrics
gathered and reported. Actions may cover any test activity and may affect any other software life
cycle activity or task.
Examples of test control actions are:
o Making decisions based on information from test monitoring.
o Re-prioritizing tests when an identified risk occurs (e.g. software delivered late).
o Changing the test schedule due to availability of a test environment.
o Setting an entry criterion requiring fixes to have been retested (confirmation tested) by a developer
before accepting them into a build.
5.4 Configuration management (K2) 10 minutes
Terms
Configuration management, version control.
Background
The purpose of configuration management is to establish and maintain the integrity of the products
(components, data and documentation) of the software or system through the project and product
life cycle.
For testing, configuration management may involve ensuring that:
o All items of testware are identified, version controlled, tracked for changes, related to each
other and related to development items (test objects) so that traceability can be maintained
throughout the test process.
o All identified documents and software items are referenced unambiguously in test
documentation.
For the tester, configuration management helps to uniquely identify (and to reproduce) the tested
item, test documents, the tests and the test harness.
During test planning, the configuration management procedures and infrastructure (tools) should be
chosen, documented and implemented.
5.5 Risk and testing (K2) 30 minutes
Terms
Product risk, project risk, risk, risk-based testing.
Background
Risk can be defined as the chance of an event, hazard, threat or situation occurring and its
undesirable consequences; in other words, a potential problem. The level of risk is determined by
the likelihood of an adverse event happening and the impact (the harm resulting from that event).
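As a hedged illustration of this definition, the sketch below scores hypothetical risk items as
likelihood multiplied by impact and orders them so that the highest risks can be addressed first; the
scales and example items are assumptions.

# Illustrative sketch: risk level as a function of likelihood and impact.
# The 1-5 scales and the example items are assumptions for demonstration.

risks = [
    {"item": "payment calculation", "likelihood": 4, "impact": 5},
    {"item": "report layout",       "likelihood": 3, "impact": 1},
    {"item": "login/session",       "likelihood": 2, "impact": 4},
]

for r in risks:
    r["level"] = r["likelihood"] * r["impact"]   # higher level = test earlier and more

for r in sorted(risks, key=lambda r: r["level"], reverse=True):
    print(f'{r["item"]}: risk level {r["level"]}')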
5.5.1 Project risks (K2)
Project risks are the risks that surround the project’s capability to deliver its objectives, such as:
o Organizational factors:
o skill and staff shortages;
o personnel and training issues;
o political issues, such as:
  - problems with testers communicating their needs and test results;
  - failure to follow up on information found in testing and reviews (e.g. not
    improving development and testing practices).
o improper attitude toward or expectations of testing (e.g. not appreciating the value of
finding defects during testing).
o Technical issues:
o problems in defining the right requirements;
o the extent that requirements can be met given existing constraints;
o the quality of the design, code and tests.
o Supplier issues:
o failure of a third party;
o contractual issues.
When analyzing, managing and mitigating these risks, the test manager is following well established
project management principles. The ‘Standard for Software Test Documentation’ (IEEE 829) outline
for test plans requires risks and contingencies to be stated.
5.5.2 Product risks (K2)
Potential failure areas (adverse future events or hazards) in the software or system are known as
product risks, as they are a risk to the quality of the product. Examples include:
o Failure-prone software delivered.
o The potential that the software/hardware could cause harm to an individual or company.
o Poor software characteristics (e.g. functionality, reliability, usability and performance).
o Software that does not perform its intended functions.
Risks are used to decide where to start testing and where to test more; testing is used to reduce the
risk of an adverse effect occurring, or to reduce the impact of an adverse effect.
Product risks are a special type of risk to the success of a project. Testing as a risk-control activity
provides feedback about the residual risk by measuring the effectiveness of critical defect removal
and of contingency plans.
A risk-based approach to testing provides proactive opportunities to reduce the levels of product
risk, starting in the initial stages of a project. It involves the identification of product risks and their
use in guiding test planning and control, specification, preparation and execution of tests. In a
risk-based approach the risks identified may be used to:
o Determine the test techniques to be employed.
o Determine the extent of testing to be carried out.
o Prioritize testing in an attempt to find the critical defects as early as possible.
o Determine whether any non-testing activities could be employed to reduce risk (e.g. providing
training to inexperienced designers).
Risk-based testing draws on the collective knowledge and insight of the project stakeholders to
determine the risks and the levels of testing required to address those risks.
To ensure that the chance of a product failure is minimized, risk management activities provide a
disciplined approach to:
o Assess (and reassess on a regular basis) what can go wrong (risks).
o Determine what risks are important to deal with.
o Implement actions to deal with those risks.
In addition, testing may support the identification of new risks, may help to determine what risks
should be reduced, and may lower uncertainty about risks.
5.6 Incident management (K3) 40 minutes
Terms
Incident logging, incident management.
Background
Since one of the objectives of testing is to find defects, the discrepancies between actual and
expected outcomes need to be logged as incidents. Incidents should be tracked from discovery and
classification to correction and confirmation of the solution. In order to manage all incidents to
completion, an organization should establish a process and rules for classification.
Incidents may be raised during development, review, testing or use of a software product. They may
be raised for issues in code or the working system, or in any type of documentation including
requirements, development documents, test documents, and user information such as “Help” or
installation guides.
Incident reports have the following objectives:
o Provide developers and other parties with feedback about the problem to enable identification,
isolation and correction as necessary.
o Provide test leaders a means of tracking the quality of the system under test and the progress
of the testing.
o Provide ideas for test process improvement.
Details of the incident report may include:
o Date of issue, issuing organization, and author.
o Expected and actual results.
o Identification of the test item (configuration item) and environment.
o Software or system life cycle process in which the incident was observed.
o Description of the incident to enable reproduction and resolution, including logs, database
dumps or screenshots.
o Scope or degree of impact on stakeholder(s) interests.
o Severity of the impact on the system.
o Urgency/priority to fix.
o Status of the incident (e.g. open, deferred, duplicate, waiting to be fixed, fixed awaiting retest,
closed).
o Conclusions, recommendations and approvals.
o Global issues, such as other areas that may be affected by a change resulting from the
incident.
o Change history, such as the sequence of actions taken by project team members with respect
to the incident to isolate, repair, and confirm it as fixed.
o References, including the identity of the test case specification that revealed the problem.
The structure of an incident report is also covered in the ‘Standard for Software Test
Documentation’ (IEEE 829).
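As an illustration only, the sketch below models an incident report as a simple record based on the
details listed above; the class and field names are assumptions and do not represent the IEEE 829
format.

# Illustrative sketch of an incident report record based on the details listed
# above. The class and field names are assumptions, not the IEEE 829 format.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    issue_date: date
    author: str
    test_item: str
    environment: str
    expected_result: str
    actual_result: str
    severity: str          # impact on the system
    priority: str          # urgency to fix
    status: str = "open"   # e.g. open, deferred, fixed awaiting retest, closed
    references: list = field(default_factory=list)

report = IncidentReport(
    issue_date=date(2007, 4, 12), author="tester A",
    test_item="billing module v1.2", environment="test lab, Windows XP",
    expected_result="total = 100.00", actual_result="total = 99.99",
    severity="major", priority="high",
    references=["TC-BILL-017"],
)
print(report.status)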
References
5.1.1 Black, 2001, Hetzel, 1988
5.1.2 Black, 2001, Hetzel, 1988
5.2.5 Black, 2001, Craig, 2002, IEEE 829, Kaner 2002
5.3.3 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE 829
5.4 Craig, 2002
5.5.2 Black, 2001, IEEE 829
5.6 Black, 2001, IEEE 829
6. Tool support for testing (K2) 80 minutes
Learning objectives for tool support for testing
The objectives identify what you will be able to do following the completion of each module.
6.1 Types of test tool (K2)
LO-6.1.1 Classify different types of test tools according to the test process activities. (K2)
LO-6.1.2 Recognize tools that may help developers in their testing. (K1)
6.2 Effective use of tools: potential benefits and risks (K2)
LO-6.2.1 Summarize the potential benefits and risks of test automation and tool support for
testing. (K2)
LO-6.2.2 Recognize that test execution tools can have different scripting techniques, including
data driven and keyword driven. (K1)
6.3 Introducing a tool into an organization (K1)
LO-6.3.1 State the main principles of introducing a tool into an organization. (K1)
LO-6.3.2 State the goals of a proof-of-concept/piloting phase for tool evaluation. (K1)
LO-6.3.3 Recognize that factors other than simply acquiring a tool are required for good tool
support. (K1)
6.1 Types of test tool (K2) 45 minutes
Terms
Configuration management tool, coverage tool, debugging tool, dynamic analysis tool, incident
management tool, load testing tool, modelling tool, monitoring tool, performance testing tool, probe
effect, requirements management tool, review tool, security tool, static analysis tool, stress testing
tool, test comparator, test data preparation tool, test design tool, test harness, test execution tool,
test management tool, unit test framework tool.
6.1.1 Test tool classification (K2)
There are a number of tools that support different aspects of testing. Tools are classified in this
syllabus according to the testing activities that they support.
Some tools clearly support one activity; others may support more than one activity, but are
classified under the activity with which they are most closely associated. Some commercial tools
offer support for only one type of activity; other commercial tool vendors offer suites or families of
tools that provide support for many or all of these activities.
Testing tools can improve the efficiency of testing activities by automating repetitive tasks. Testing
tools can also improve the reliability of testing by, for example, automating large data comparisons
or simulating behaviour.
Some types of test tool can be intrusive in that the tool itself can affect the actual outcome of the
test. For example, the actual timing may be different depending on how you measure it with
different performance tools, or you may get a different measure of code coverage depending on
which coverage tool you use. The consequence of intrusive tools is called the probe effect.
Some tools offer support more appropriate for developers (e.g. during component and component
integration testing). Such tools are marked with “(D)” in the classifications below.
6.1.2 Tool support for management of testing and tests (K1)
Management tools apply to all test activities over the entire software life cycle.
Test management tools
Characteristics of test management tools include:
o Support for the management of tests and the testing activities carried out.
o Interfaces to test execution tools, defect tracking tools and requirement management tools.
o Independent version control or interface with an external configuration management tool.
o Support for traceability of tests, test results and incidents to source documents, such as
requirements specifications.
o Logging of test results and generation of progress reports.
o Quantitative analysis (metrics) related to the tests (e.g. tests run and tests passed) and the test
object (e.g. incidents raised), in order to give information about the test object, and to control
and improve the test process.
Requirements management tools
Requirements management tools store requirement statements, check for consistency and
undefined (missing) requirements, allow requirements to be prioritized and enable individual tests to
be traceable to requirements, functions and/or features. Traceability may be reported in test
management progress reports. The coverage of requirements, functions and/or features by a set of
tests may also be reported.
Incident management tools
Incident management tools store and manage incident reports, i.e. defects, failures or perceived
problems and anomalies, and support their management in ways that include:
o Facilitating their prioritization.
o Assignment of actions to people (e.g. fix or confirmation test).
o Attribution of status (e.g. rejected, ready to be tested or deferred to next release).
These tools enable the progress of incidents to be monitored over time, often provide support for
statistical analysis and provide reports about incidents. They are also known as defect tracking
tools.
Configuration management tools
Configuration management (CM) tools are not strictly testing tools, but are typically necessary to
keep track of different versions and builds of the software and tests.
Configuration management tools:
o Store information about versions and builds of software and testware.
o Enable traceability between testware and software work products and product variants.
o Are particularly useful when developing on more than one configuration of the
hardware/software environment (e.g. for different operating system versions, different libraries
or compilers, different browsers or different computers).
6.1.3 Tool support for static testing (K1)
Review tools
Review tools (also known as review process support tools) may store information about review
processes, store and communicate review comments, report on defects and effort, manage
references to review rules and/or checklists and keep track of traceability between documents and
source code. They may also provide aid for online reviews, which is useful if the team is
geographically dispersed.
Static analysis tools (D)
Static analysis tools support developers, testers and quality assurance personnel in finding defects
before dynamic testing. Their major purposes include:
o The enforcement of coding standards.
o The analysis of structures and dependencies (e.g. linked web pages).
o Aiding in understanding the code.
Static analysis tools can calculate metrics from the code (e.g. complexity), which can give valuable
information, for example, for planning or risk analysis.
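A minimal sketch of the idea, assuming a simple locally defined coding rule (a limit on the number of
decision points per function): the source code is parsed and analyzed without being executed. The
rule, threshold and example code are assumptions, not those of any particular tool.

# Illustrative sketch of static analysis: parse source code (without running it)
# and report functions whose decision count exceeds a threshold. The rule and
# threshold are assumptions, not those of any particular tool.

import ast

SOURCE = """
def discount(price, customer):
    if customer == "gold":
        if price > 100:
            return price * 0.8
        return price * 0.9
    elif customer == "silver":
        return price * 0.95
    return price
"""

MAX_DECISIONS = 2

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        decisions = sum(isinstance(n, (ast.If, ast.For, ast.While))
                        for n in ast.walk(node))
        if decisions > MAX_DECISIONS:
            print(f"{node.name}: {decisions} decisions (limit {MAX_DECISIONS})")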
Modelling tools (D)
Modelling tools are able to validate models of the software. For example, a database model checker
may find defects and inconsistencies in the data model; other modelling tools may find defects in a
state model or an object model. These tools can often aid in generating some test cases based on
the model (see also Test design tools below).
The major benefit of static analysis tools and modelling tools is the cost-effectiveness of finding
more defects earlier in the development process. As a result, the development process may
accelerate and improve because less rework is needed.
6.1.4 Tool support for test specification (K1)
Test design tools
Test design tools generate test inputs or executable tests from requirements, from a graphical user
interface, from design models (state, data or object) or from code. This type of tool may generate
expected outcomes as well (i.e. may use a test oracle). The generated tests from a state or object
model are useful for verifying the implementation of the model in the software, but are seldom
sufficient for verifying all aspects of the software or system. They can save valuable time and
provide increased thoroughness of testing because of the completeness of the tests that the tool
can generate.
Other tools in this category can aid in supporting the generation of tests by providing structured
templates, sometimes called a test frame, that generate tests or test stubs, and thus speed up the
test design process.
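As an illustration of generating tests from a state model (mentioned above), the sketch below derives
one test step per transition of a small, assumed workflow model, which is enough for transition
coverage of that model.

# Illustrative sketch of model-based test design: derive one test step per
# transition of a small state model so that all transitions are covered.
# The example model (a document workflow) is an assumption.

transitions = {
    ("draft",     "submit"):  "in_review",
    ("in_review", "approve"): "published",
    ("in_review", "reject"):  "draft",
    ("published", "retire"):  "archived",
}

# Each generated test gives the start state, the event to trigger and the
# state expected afterwards - enough for 100% transition coverage here.
for i, ((state, event), target) in enumerate(sorted(transitions.items()), 1):
    print(f"TC{i}: from '{state}', trigger '{event}', expect '{target}'")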
Test data preparation tools
Test data preparation tools manipulate databases, files or data transmissions to set up test data to
be used during the execution of tests. A benefit of these tools is to ensure that live data transferred
to a test environment is made anonymous, for data protection.
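A minimal sketch of this idea, assuming a small CSV export of live data: personal identifiers are
replaced with stable pseudonyms before the data is used in a test environment. The column names
and masking rules are assumptions.

# Illustrative sketch of test data preparation: masking personal data before it
# is used in a test environment. Column names and masking rules are assumptions.

import csv
import hashlib
import io

LIVE_EXPORT = """name,email,balance
Alice Smith,alice@example.com,120.50
Bob Jones,bob@example.com,87.10
"""

def mask(value: str) -> str:
    # Replace a value with a stable pseudonym so related rows stay consistent.
    return hashlib.sha256(value.encode()).hexdigest()[:8]

reader = csv.DictReader(io.StringIO(LIVE_EXPORT))
for row in reader:
    row["name"] = "user-" + mask(row["name"])
    row["email"] = mask(row["email"]) + "@test.invalid"
    print(row)   # balance is kept, personal identifiers are anonymized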
6.1.5 Tool support for test execution and logging (K1)
Test execution tools
Test execution tools enable tests to be executed automatically, or semi-automatically, using stored
inputs and expected outcomes, through the use of a scripting language. The scripting language
makes it possible to manipulate the tests with limited effort, for example, to repeat the test with
different data or to test a different part of the system with similar steps. Generally these tools
include dynamic comparison features and provide a test log for each test run.
Test execution tools can also be used to record tests, when they may be referred to as capture
playback tools. Capturing test inputs during exploratory testing or unscripted testing can be useful in
order to reproduce and/or document a test, for example, if a failure occurs.
Test harness/unit test framework tools (D)
A test harness may facilitate the testing of components or part of a system by simulating the
environment in which that test object will run. This may be done either because other components
of that environment are not yet available and are replaced by stubs and/or drivers, or simply to
provide a predictable and controllable environment in which any faults can be localized to the object
under test.
A framework may be created where part of the code, object, method or function, unit or component
can be executed, by calling the object to be tested and/or giving feedback to that object. It can do
this by providing artificial means of supplying input to the test object, and/or by supplying stubs to
take output from the object, in place of the real output targets.
Test harness tools can also be used to provide an execution framework in middleware, where
languages, operating systems or hardware must be tested together.
They may be called unit test framework tools when they have a particular focus on the component
test level. This type of tool aids in executing the component tests in parallel with building the code.
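For illustration, the sketch below uses Python's standard unittest framework to test a component in
isolation, with a stub standing in for a component that is not yet available; the component and stub
are invented for the example.

# Illustrative sketch of a unit test framework tool in use: the component under
# test is exercised in isolation, with a stub standing in for a component that
# is not yet available. Names are assumptions for demonstration only.

import unittest

def price_with_tax(net, tax_service):
    # Component under test: depends on an external tax service.
    return round(net * (1 + tax_service.rate_for("NL")), 2)

class StubTaxService:
    # Stub: returns a fixed, predictable value instead of calling a real service.
    def rate_for(self, country):
        return 0.21

class PriceWithTaxTest(unittest.TestCase):
    def test_dutch_vat_is_applied(self):
        self.assertEqual(price_with_tax(100.0, StubTaxService()), 121.0)

if __name__ == "__main__":
    unittest.main()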
Test comparators
Test comparators determine differences between files, databases or test results. Test execution
tools typically include dynamic comparators, but post-execution comparison may be done by a
separate comparison tool. A test comparator may use a test oracle, especially if it is automated.
Coverage measurement tools (D)
Coverage measurement tools can be either intrusive or non-intrusive depending on the
measurement techniques used, what is measured and the coding language. Code coverage tools
measure the percentage of specific types of code structure that have been exercised (e.g.
statements, branches or decisions, and module or function calls). These tools show how thoroughly
the measured type of structure has been exercised by a set of tests.
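A minimal sketch of statement coverage measurement, assuming Python's built-in trace hook: it
records which lines of the function under test are executed by the tests. Real coverage tools are far
more capable; the sketch also hints at the probe effect, since tracing slows down the code being
measured.

# Illustrative sketch of statement coverage measurement using Python's trace
# hook. Real coverage tools are far more sophisticated; this also hints at the
# probe effect, since tracing slows the code under test.

import sys

def triangle(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

executed = set()

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "triangle":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
triangle(3, 3, 3)      # the test suite so far: a single test case
sys.settrace(None)

print(f"lines of triangle() executed by the tests: {sorted(executed)}")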
Security tools
Security tools check for computer viruses and denial of service attacks. A firewall, for example, is
not strictly a testing tool, but may be used in security testing. Security testing tools search for
specific vulnerabilities of the system.
6.1.6 Tool support for performance and monitoring (K1)
Dynamic analysis tools (D)
Dynamic analysis tools find defects that are evident only when software is executing, such as time
dependencies or memory leaks. They are typically used in component and component integration
testing, and when testing middleware.
Performance testing/load testing/stress testing tools
Performance testing tools monitor and report on how a system behaves under a variety of simulated
usage conditions. They simulate a load on an application, a database, or a system environment,
such as a network or server. The tools are often named after the aspect of performance that they
measure, such as load or stress, so are also known as load testing tools or stress testing tools.
They are often based on automated repetitive execution of tests, controlled by parameters.
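A minimal sketch of the idea, assuming a placeholder operation standing in for e.g. an HTTP request
to the system under test: the same operation is executed repeatedly from several worker threads and
simple response-time statistics are reported. The load parameters are assumptions.

# Illustrative sketch of a load test: run the same operation from several
# worker threads and report simple response-time statistics. The simulated
# operation and the load parameters are assumptions.

import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def operation_under_test():
    # Placeholder for e.g. an HTTP request to the system under test.
    time.sleep(0.01)

def timed_call(_):
    start = time.perf_counter()
    operation_under_test()
    return time.perf_counter() - start

USERS, REQUESTS = 10, 100
with ThreadPoolExecutor(max_workers=USERS) as pool:
    durations = list(pool.map(timed_call, range(REQUESTS)))

print(f"mean {statistics.mean(durations)*1000:.1f} ms, "
      f"max {max(durations)*1000:.1f} ms over {REQUESTS} requests")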
Monitoring tools
Monitoring tools are not strictly testing tools but provide information that can be used for testing
purposes and which is not available by other means.
Monitoring tools continuously analyze, verify and report on usage of specific system resources, and
give warnings of possible service problems. They store information about the version and build of
the software and testware, and enable traceability.
6.1.7 Tool support for specific application areas (K1)
Individual examples of the types of tool classified above can be specialized for use in a particular
type of application. For example, there are performance testing tools specifically for web-based
applications, static analysis tools for specific development platforms, and dynamic analysis tools
specifically for testing security aspects.
Commercial tool suites may target specific application areas (e.g. embedded systems).
6.1.8 Tool support using other tools (K1)
The test tools listed here are not the only types of tools used by testers – they may also use
spreadsheets, SQL, resource or debugging tools (D), for example.
6.2 Effective use of tools: potential benefits and risks (K2) 20 minutes
Terms
Data-driven (testing), keyword-driven (testing), scripting language.
6.2.1 Potential benefits and risks of tool support for testing (for all tools) (K2)
Simply purchasing or leasing a tool does not guarantee success with that tool. Each type of tool
may require additional effort to achieve real and lasting benefits. There are potential benefits and
opportunities with the use of tools in testing, but there are also risks.
Potential benefits of using tools include:
o Repetitive work is reduced (e.g. running regression tests, re-entering the same test data, and
checking against coding standards).
o Greater consistency and repeatability (e.g. tests executed by a tool, and tests derived from
requirements).
o Objective assessment (e.g. static measures, coverage).
o Ease of access to information about tests or testing (e.g. statistics and graphs about test
progress, incident rates and performance).
Risks of using tools include:
o Unrealistic expectations for the tool (including functionality and ease of use).
o Underestimating the time, cost and effort for the initial introduction of a tool (including training
and external expertise).
o Underestimating the time and effort needed to achieve significant and continuing benefits from
the tool (including the need for changes in the testing process and continuous improvement of
the way the tool is used).
o Underestimating the effort required to maintain the test assets generated by the tool.
o Over-reliance on the tool (e.g. as a replacement for test design, or use of automated testing where manual testing would be better).
6.2.2 Special considerations for some types of tool (K1)
Test execution tools
Test execution tools replay scripts designed to implement tests that are stored electronically. This
type of tool often requires significant effort in order to achieve significant benefits.
Capturing tests by recording the actions of a manual tester seems attractive, but this approach does
not scale to large numbers of automated tests. A captured script is a linear representation with
specific data and actions as part of each script. This type of script may be unstable when
unexpected events occur.
A data-driven approach separates out the test inputs (the data), usually into a spreadsheet, and
uses a more generic script that can read the test data and perform the same test with different data.
Testers who are not familiar with the scripting language can enter test data for these predefined
scripts.
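A minimal sketch of a data-driven script, assuming the test data is held in a small table (a CSV string
standing in for a spreadsheet) and that a trivial add function represents the feature under test: one
generic script repeats the same check for every row.

# Illustrative sketch of a data-driven script: the test data lives in a table
# (here a CSV string standing in for a spreadsheet) and one generic script
# repeats the same check for every row. Column names are assumptions.

import csv
import io

TEST_DATA = """a,b,expected_sum
1,2,3
10,-4,6
0,0,0
"""

def add(a, b):
    # Stand-in for the function or screen under test.
    return a + b

for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = add(int(row["a"]), int(row["b"]))
    expected = int(row["expected_sum"])
    print(f'{row}: {"PASS" if actual == expected else "FAIL"}')

Testers can add further rows of data without changing the script itself.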
In a keyword-driven approach, the spreadsheet contains keywords describing the actions to be
taken (also called action words), and test data. Testers (even if they are not familiar with the
scripting language) can then define tests using the keywords, which can be tailored to the
application being tested.
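A minimal sketch of a keyword-driven script under the same assumptions: a dispatch table maps each
keyword to the code that performs it, and the test itself is just a list of keyword-plus-data rows such
as a tester might maintain in a spreadsheet. The keywords and steps are invented for the example.

# Illustrative sketch of a keyword-driven script: each row of the test table
# names an action (keyword) and its data; a dispatch table maps keywords to
# the code that performs them. Keywords and steps are assumptions.

def open_account(owner):
    print(f"account opened for {owner}")

def deposit(amount):
    print(f"deposited {amount}")

def check_balance(expected):
    print(f"balance checked against {expected}")

KEYWORDS = {
    "open account":  open_account,
    "deposit":       deposit,
    "check balance": check_balance,
}

# The test itself, as a tester might write it in a spreadsheet: keyword + data.
test_steps = [
    ("open account", "Alice"),
    ("deposit", "100"),
    ("check balance", "100"),
]

for keyword, data in test_steps:
    KEYWORDS[keyword](data)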
Technical expertise in the scripting language is needed for all approaches (either by testers or by
specialists in test automation).
Whichever scripting technique is used, the expected results for each test need to be stored for later
comparison.
Performance testing tools
Performance testing tools need someone with expertise in performance testing to help design the
tests and interpret the results.
Static analysis tools
Static analysis tools applied to source code can enforce coding standards, but if applied to existing
code may generate a lot of messages. Warning messages do not stop the code being translated
into an executable program, but should ideally be addressed so that maintenance of the code is
easier in the future. A gradual implementation with initial filters to exclude some messages would be
an effective approach.
Test management tools
Test management tools need to interface with other tools or spreadsheets in order to produce
information in the best format for the current needs of the organization. The reports need to be
designed and monitored so that they provide benefit.
6.3 Introducing a tool into an organization (K1) 15 minutes
Terms
No specific terms.
Background
The main considerations in selecting a tool for an organization include:
o Assessment of organizational maturity, strengths and weaknesses and identification of
opportunities for an improved test process supported by tools.
o Evaluation against clear requirements and objective criteria.
o A proof-of-concept to test the required functionality and determine whether the product meets
its objectives.
o Evaluation of the vendor (including training, support and commercial aspects).
o Identification of internal requirements for coaching and mentoring in the use of the tool.
Introducing the selected tool into an organization starts with a pilot project, which has the following
objectives:
o Learn more detail about the tool.
o Evaluate how the tool fits with existing processes and practices, and determine what would
need to change.
o Decide on standard ways of using, managing, storing and maintaining the tool and the test
assets (e.g. deciding on naming conventions for files and tests, creating libraries and defining
the modularity of test suites).
o Assess whether the benefits will be achieved at reasonable cost.
Success factors for the deployment of the tool within an organization include:
o Rolling out the tool to the rest of the organization incrementally.
o Adapting and improving processes to fit with the use of the tool.
o Providing training and coaching/mentoring for new users.
o Defining usage guidelines.
o Implementing a way to learn lessons from tool use.
o Monitoring tool use and benefits.
References
6.2.2 Buwalda, 2001, Fewster, 1999
6.3 Fewster, 1999
7. References
Standards
ISTQB Glossary of terms used in Software Testing Version 1.0
[CMMI] Chrissis, M.B., Konrad, M. and Shrum, S. (2004) CMMI, Guidelines for Process Integration
and Product Improvement, Addison Wesley: Reading, MA
See Section 2.1
[IEEE 829] IEEE Std 829™ (1998/2007) IEEE Standard for Software Test Documentation (currently
under revision)
See Sections 2.3, 2.4, 4.1, 5.2, 5.3, 5.5, 5.6
[IEEE 1028] IEEE Std 1028™ (1997) IEEE Standard for Software Reviews
See Section 3.2
[IEEE 12207] IEEE 12207/ISO/IEC 12207-1996, Software life cycle processes
See Section 2.1
[ISO 9126] ISO/IEC 9126-1:2001, Software Engineering – Software Product Quality
See Section 2.3
Books
[Beizer, 1990] Beizer, B. (1990) Software Testing Techniques (2nd edition), Van Nostrand Reinhold:
Boston
See Sections 1.2, 1.3, 2.3, 4.2, 4.3, 4.4, 4.6
[Black, 2001] Black, R. (2001) Managing the Testing Process (2nd edition), John Wiley & Sons:
New York
See Sections 1.1, 1.2, 1.4, 1.5, 2.3, 2.4, 5.1, 5.2, 5.3, 5.5, 5.6
[Buwalda, 2001] Buwalda, H. et al. (2001) Integrated Test Design and Automation, Addison Wesley:
Reading, MA
See Section 6.2
[Copeland, 2004] Copeland, L. (2004) A Practitioner’s Guide to Software Test Design, Artech
House: Norwood, MA
See Sections 2.2, 2.3, 4.2, 4.3, 4.4, 4.6
[Craig, 2002] Craig, Rick D. and Jaskiel, Stefan P. (2002) Systematic Software Testing, Artech
House: Norwood, MA
See Sections 1.4.5, 2.1.3, 2.4, 4.1, 5.2.5, 5.3, 5.4
[Fewster, 1999] Fewster, M. and Graham, D. (1999) Software Test Automation, Addison Wesley:
Reading, MA
See Sections 6.2, 6.3
[Gilb, 1993] Gilb, Tom and Graham, Dorothy (1993) Software Inspection, Addison Wesley:
Reading, MA
See Sections 3.2.2, 3.2.4
[Hetzel, 1988] Hetzel, W. (1988) Complete Guide to Software Testing, QED: Wellesley, MA
See Sections 1.3, 1.4, 1.5, 2.1, 2.2, 2.3, 2.4, 4.1, 5.1, 5.3
[Kaner, 2002] Kaner, C., Bach, J. and Pettichord, B. (2002) Lessons Learned in Software Testing,
John Wiley & Sons: New York
See Sections 1.1, 4.5, 5.2
[Myers 1979] Myers, Glenford J. (1979) The Art of Software Testing, John Wiley & Sons: New York
See Sections 1.2, 1.3, 2.2, 4.3
[van Veenendaal, 2004] van Veenendaal, E. (ed.) (2004) The Testing Practitioner (Chapters 6, 8, 10),
UTN Publishers: The Netherlands
See Sections 3.2, 3.3
8. Appendix A – Syllabus background
History of this document
This document was prepared between 2004 and 2007 by a working party comprising members
appointed by the International Software Testing Qualifications Board (ISTQB). It was initially
reviewed by a selected review panel, and then by representatives drawn from the international
software testing community. The rules used in the production of this document are shown in
Appendix C.
This document is the syllabus for the International Foundation Certificate in Software Testing, the
first level international qualification approved by the ISTQB (www.istqb.org).
Objectives of the Foundation Certificate qualification
o To gain recognition for testing as an essential and professional software engineering
specialization.
o To provide a standard framework for the development of testers' careers.
o To enable professionally qualified testers to be recognized by employers, customers and peers,
and to raise the profile of testers.
o To promote consistent and good testing practices within all software engineering disciplines.
o To identify testing topics that are relevant and of value to industry.
o To enable software suppliers to hire certified testers and thereby gain commercial advantage
over their competitors by advertising their tester recruitment policy.
o To provide an opportunity for testers and those with an interest in testing to acquire an
internationally recognized qualification in the subject.
Objectives of the international qualification (adapted from ISTQB
meeting at Sollentuna, November 2001)
o To be able to compare testing skills across different countries.
o To enable testers to move across country borders more easily.
o To enable multinational/international projects to have a common understanding of testing
issues.
o To increase the number of qualified testers worldwide.
o To have more impact/value as an internationally based initiative than from any country-specific
approach.
o To develop a common international body of understanding and knowledge about testing
through the syllabus and terminology, and to increase the level of knowledge about testing for
all participants.
o To promote testing as a profession in more countries.
o To enable testers to gain a recognized qualification in their native language.
o To enable sharing of knowledge and resources across countries.
o To provide international recognition of testers and this qualification due to participation from
many countries.
Entry requirements for this qualification
The entry criterion for taking the ISTQB Foundation Certificate in Software Testing examination is
that candidates have an interest in software testing. However, it is strongly recommended that
candidates also:
o Have at least a minimal background in either software development or software testing, such as
six months experience as a system or user acceptance tester or as a software developer.
o Take a course that has been accredited to ISTQB standards (by one of the ISTQB-recognized
national boards).
Background and history of the Foundation Certificate in Software
Testing
The independent certification of software testers began in the UK with the British Computer
Society's Information Systems Examination Board (ISEB), when a Software Testing Board was set
up in 1998 (www.bcs.org.uk/iseb). In 2002, ASQF in Germany began to support a German tester
qualification scheme (www.asqf.de). This syllabus is based on the ISEB and ASQF syllabi; it
includes reorganized, updated and some new content, and the emphasis is directed at topics that
will provide the most practical help to testers.
An existing Foundation Certificate in Software Testing (e.g. from ISEB, ASQF or an ISTQB-recognized
national board) awarded before this International Certificate was released will be deemed to be
equivalent to the International Certificate. The Foundation Certificate does not expire
and does not need to be renewed. The date it was awarded is shown on the Certificate.
Within each participating country, local aspects are controlled by a national ISTQB-recognized
Software Testing Board. Duties of national boards are specified by the ISTQB, but are implemented
within each country. The duties of the country boards are expected to include accreditation of
training providers and the setting of exams.
9. Appendix B – Learning objectives/level of knowledge
The following learning objectives are defined as applying to this syllabus. Each topic in the syllabus
will be examined according to the learning objective for it.
Level 1: Remember (K1)
The candidate will recognize, remember and recall a term or concept.
Example
Can recognize the definition of “failure” as:
o “non-delivery of service to an end user or any other stakeholder” or
o “actual deviation of the component or system from its expected delivery, service or result”.
Level 2: Understand (K2)
The candidate can select the reasons or explanations for statements related to the topic, and can
summarize, compare, classify, categorize and give examples for the testing concept.
Examples
Can explain the reason why tests should be designed as early as possible:
o To find defects when they are cheaper to remove.
o To find the most important defects first.
Can explain the similarities and differences between integration and system testing:
o Similarities: testing more than one component, and can test non-functional aspects.
o Differences: integration testing concentrates on interfaces and interactions, and system testing
concentrates on whole-system aspects, such as end-to-end processing.
Level 3: Apply (K3)
The candidate can select the correct application of a concept or technique and apply it to a given
context.
Example
o Can identify boundary values for valid and invalid partitions.
o Can select test cases from a given state transition diagram in order to cover all transitions.
Reference
(For the cognitive levels of learning objectives)
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching, and
Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon:
10. Appendix C – Rules applied to the ISTQB
Foundation syllabus
The rules listed here were used in the development and review of this syllabus. (A “TAG” is shown
after each rule as a shorthand abbreviation of the rule.)
General rules
SG1. The syllabus should be understandable and absorbable by people with 0 to 6 months (or
more) experience in testing. (6-MONTH)
SG2. The syllabus should be practical rather than theoretical. (PRACTICAL)
SG3. The syllabus should be clear and unambiguous to its intended readers. (CLEAR)
SG4. The syllabus should be understandable to people from different countries, and easily
translatable into different languages. (TRANSLATABLE)
SG5. The syllabus should use American English. (AMERICAN-ENGLISH)
Current content
SC1. The syllabus should include recent testing concepts and should reflect current best practice in
software testing where this is generally agreed. The syllabus is subject to review every three to five
years. (RECENT)
SC2. The syllabus should minimize time-related issues, such as current market conditions, to
enable it to have a shelf life of three to five years. (SHELF-LIFE).
Learning Objectives
LO1. Learning objectives should distinguish between items to be recognized/remembered (cognitive
level K1), items the candidate should understand conceptually (K2) and those which the candidate
should be able to practice/use (K3). (KNOWLEDGE-LEVEL)
LO2. The description of the content should be consistent with the learning objectives. (LO-CONSISTENT)
LO3. To illustrate the learning objectives, sample exam questions for each major section should be
issued along with the syllabus. (LO-EXAM)
Overall structure
ST1. The structure of the syllabus should be clear and allow cross-referencing to and from other
parts, from exam questions and from other relevant documents. (CROSS-REF)
ST2. Overlap between sections of the syllabus should be minimized. (OVERLAP)
ST3. Each section of the syllabus should have the same structure. (STRUCTURE-CONSISTENT)
ST4. The syllabus should contain version, date of issue and page number on every page.
(VERSION)
ST5. The syllabus should include a guideline for the amount of time to be spent in each section (to
reflect the relative importance of each topic). (TIME-SPENT)
References
SR1. Sources and references will be given for concepts in the syllabus to help training providers
find out more information about the topic. (REFS)
SR2. Where there are not readily identified and clear sources, more detail should be provided in the
syllabus. For example, definitions are in the Glossary, so only the terms are listed in the syllabus.
(NON-REF DETAIL)
Sources of information
Terms used in the syllabus are defined in ISTQB’s Glossary of terms used in Software Testing. A
version of the Glossary is available from ISTQB.
A list of recommended books on software testing is also issued in parallel with this syllabus. The
main book list is part of the References section.
11. Appendix D – Notice to training providers
Each major subject heading in the syllabus is assigned an allocated time in minutes. The purpose of
this is both to give guidance on the relative proportion of time to be allocated to each section of an
accredited course, and to give an approximate minimum time for the teaching of each section.
Training providers may spend more time than is indicated and candidates may spend more time
again in reading and research. A course curriculum does not have to follow the same order as the
syllabus.
The syllabus contains references to established standards, which must be used in the preparation
of training material. Each standard used must be the version quoted in the current version of this
syllabus. Other publications, templates or standards not referenced in this syllabus may also be
used and referenced, but will not be examined.
The specific areas of the syllabus requiring practical exercises are as follows:
4.3 Specification-based or black-box techniques
Practical work (short exercises) should be included covering the four techniques: equivalence
partitioning, boundary value analysis, decision table testing and state transition testing. The lectures
and exercises relating to these techniques should be based on the references provided for each
technique.
4.4 Structure-based or white-box techniques
Practical work (short exercises) should be included to assess whether or not a set of tests achieves
100% statement and 100% decision coverage, as well as to design test cases for given control
flows.
5.6 Incident management
Practical work (short exercise) should be included to cover the writing and/or assessment of an
incident report.
12. Appendix E – Release Notes Syllabus 2007
1. Learning Objectives (LO) numbered to make it easier to remember them.
2. Wording changed for the following LOs (content and level of LO remains unchanged): 1.1.5,
1.5.1, 2.3.5, 4.1.3, 4.1.3, 4.1.4, 4.3.2, 5.2.2.
3. LO 3.1.4 moved from Chapter 3.3.
4. LO 4.3.1 and 4.4.3 moved from Chapter 4.1. K-Level change in Chapter 4.1
5. LO “Write a test execution schedule for a given set of test cases, considering prioritization,
and technical and logical dependencies. (K3)” moved from section 4.1 to LO 5.2.5.
6. LO-5.2.3 ”Differentiate between conceptually different test approaches such as analytical,
model-based, methodical, process/standard compliant, dynamic/heuristic, consultative and
regression averse. (K2)” added because chapter 5.2 specifies the topic and the LO was
missing.
7. LO-5.2.6 “List test preparation and execution activities that should be considered during test
planning. (K1)” has been added. Before, this list was part of chapter 1.4 and covered by the
LO 1.4.1.
8. Section 4.1 changed to “Test Development Process” from “Identifying test conditions and
designing test cases”.
9. Section 2.1 now discusses the two development models: sequential vs. iterative-incremental.
10. Terms are now mentioned only in the first chapter in which they occur and have been removed
from subsequent chapters.
11. Terms are now all in singular form.
12. New terms: Fault attack (4.5), incident management (5.6), retesting (1.4), error guessing
(1.5), independence (1.5), iterative-incremental development model (2.1), static testing
(3.1), and static technique (3.1).
13. Removed terms: software, testing, development (of software), test basis, independent
testing, contractual acceptance testing (2.2), retirement (2.4), modification (2.4), migration
(2.4), kick-off (3.2), review meeting (3.2), review process (3.2).
14. "D" designation has been removed from Section 6.1.5 Test data preparation tools.
13. Index
action word ............................................ 63
alpha testing .....................................22, 24
architecture ...............15, 19, 21, 23, 25, 26
archiving ...........................................16, 27
automation ............................................. 26
benefits of independence....................... 45
benefits of using tool .............................. 62
beta testing .......................................22, 24
black-box technique....................34, 37, 38
black-box test design technique............. 37
black-box testing.................................... 25
bottom-up............................................... 23
boundary value analysis ...................34, 38
bug....................................................10, 11
capture playback tool ............................. 59
captured script ....................................... 62
checklists ....................................30, 31, 58
choosing test technique ......................... 42
code coverage ................25, 26, 34, 40, 57
commercial off the shelf (COTS)............ 21
off-the-shelf............................................ 20
compiler ................................................. 33
complexity.............................11, 33, 48, 58
component integration testing ..... 20, 22, 26, 57, 60
component testing ..... 20, 22, 24, 26, 34, 39, 40
configuration management ...43, 46, 51, 57
configuration management tool.........57, 58
Configuration management tool ............. 57
confirmation testing...13, 15, 16, 19, 25, 26
contract acceptance testing ................... 24
control flow............................25, 33, 34, 40
coverage 15, 22, 25, 26, 34, 36, 37, 38, 40,
47, 48, 49, 57, 58, 60, 62
coverage tool ....................................57, 60
custom-developed software ................... 24
data flow ................................................ 33
data-driven approach............................. 63
data-driven testing ................................. 62
debugging.......................13, 22, 26, 57, 61
debugging tool ............................22, 57, 61
decision coverage.............................34, 40
decision table testing ........................34, 38
decision testing .................................34, 40
defect ..... 10, 11, 12, 13, 14, 16, 17, 19, 22, 23,
25, 26, 28, 29, 30, 31, 32, 33, 35, 37, 38,
39, 41, 42, 43, 45, 47, 48, 49, 52, 53, 54,
57, 58, 59, 60, 69
defect density....................................47, 49
defect tracking tool............................57, 58
development ..8, 10, 11, 12, 13, 14, 17, 19,
20, 22, 23, 26, 29, 30, 33, 36, 42, 45, 47,
48, 51, 52, 54, 59, 60, 67
development model.......................... 19, 20
drawbacks of independence................... 45
driver................................................ 22, 59
dynamic analysis tool ....................... 57, 60
dynamic testing ...............13, 28, 29, 33, 58
embedded system.................................. 60
emergency change................................. 27
enhancement ................................... 24, 27
entry criteria ........................................... 30
equivalence partitioning ................... 34, 38
error ................................10, 11, 17, 41, 48
error guessing ............................ 17, 41, 48
exhaustive testing .................................. 14
exit criteria ..... 13, 15, 16, 30, 31, 43, 46, 47, 49
expected result................16, 34, 36, 46, 63
experience-based technique ...... 35, 37, 41
experience-based test design technique 37
exploratory testing...................... 41, 48, 59
factory acceptance testing ..................... 24
failure ..... 10, 11, 13, 14, 17, 19, 22, 23, 29, 33,
41, 44, 48, 49, 52, 53, 58, 59, 69
failure rate ........................................ 48, 49
fault .......................................10, 11, 41, 59
fault attack.............................................. 41
field testing....................................... 22, 24
follow-up .......................................... 30, 31
formal review.................................... 28, 30
functional requirement...................... 22, 23
functional specification........................... 25
functional task ........................................ 23
functional test......................................... 25
functional testing .................................... 25
functionality .........22, 23, 25, 47, 52, 62, 64
impact analysis .......................... 19, 27, 36
incident ..... 15, 16, 18, 22, 44, 46, 54, 57, 58, 62
incident logging ...................................... 54
incident management................. 46, 54, 57
incident management tool ................ 57, 58
incident report .................................. 44, 54
independence ............................ 17, 45, 46
informal review........................... 28, 30, 31
inspection..............................28, 30, 31, 32
inspection leader.................................... 30
integration ..... 13, 20, 21, 22, 23, 26, 33, 38, 39,
40, 43, 46, 57, 60, 69
integration testing ..... 20, 21, 22, 23, 26, 33, 38,
43, 57, 60, 69
interoperability testing ............................ 25
introducing a tool into an organization ..... 56, 64
ISO 9126...............................11, 25, 27, 65
development model................................ 20
iterative-incremental development model ..... 20
keyword-driven approach....................... 63
keyword-driven testing........................... 62
kick-off ................................................... 30
learning objective... 8, 9, 10, 19, 28, 34, 43,
56, 69, 70
load testing .................................25, 57, 60
load testing tool.................................57, 60
test case ................................................ 36
maintainability testing............................. 25
maintenance testing..........................19, 27
management tool ..................46, 57, 58, 63
maturity.................................16, 30, 36, 64
metric..........................................30, 31, 43
mistake .......................................10, 11, 16
modelling tool......................................... 59
moderator .........................................30, 31
monitoring tool ............................46, 57, 60
non-functional requirement .........19, 22, 23
non-functional testing........................11, 25
objectives for testing .............................. 13
operational acceptance testing .............. 24
operational test ...........................13, 21, 27
patch...................................................... 27
peer review .......................................30, 31
performance testing ..............25, 57, 60, 63
performance testing tool .............57, 60, 63
pesticide paradox................................... 14
portability testing.................................... 25
probe effect............................................ 57
product risk ...........................17, 43, 52, 53
project risk ..................................12, 43, 52
prototyping ............................................. 20
quality 8, 10, 11, 12, 13, 18, 25, 34, 36, 45,
46, 48, 52, 54, 58
rapid application development (RAD)..... 20
Rational Unified Process (RUP)............. 20
recorder ................................................. 30
regression testing......15, 16, 19, 25, 26, 27
Regulation acceptance testing............... 24
reliability..............11, 13, 25, 47, 48, 52, 57
reliability testing ..................................... 25
requirement.........13, 20, 22, 29, 31, 57, 58
requirements management tool ........57, 58
requirements specification ................23, 25
responsibilities ............................22, 28, 30
re-testing ..... 26, See confirmation testing
review13, 18, 28, 29, 30, 31, 32, 33, 45, 46,
52, 54, 57, 58, 67, 70
review tool ..... 57, 58
reviewer ........................................... 30, 31
risk ..... 11, 12, 13, 14, 22, 23, 26, 27, 35, 36, 42,
43, 47, 48, 50, 52, 53, 58
risk-based approach............................... 53
risk-based testing....................... 48, 52, 53
risks ......................................11, 22, 47, 52
risks of using tool ................................... 62
robustness testing.................................. 22
roles ................8, 28, 30, 31, 32, 45, 46, 47
root cause ........................................ 10, 11
scribe ............................................... 30, 31
scripting language...................... 59, 62, 63
security ...............24, 25, 33, 45, 48, 57, 60
security testing ................................. 25, 60
security tool...................................... 57, 60
simulators............................................... 22
site acceptance testing........................... 24
software development .......8, 10, 11, 19, 20
software development model ................. 20
special considerations for some types of tool ..... 62
test case ................................................ 36
specification-based technique.... 26, 37, 38
specification-based test design technique ..... 37
specification-based testing......... 25, 34, 35
stakeholders..12, 13, 16, 17, 23, 37, 43, 53
state transition testing ................ 34, 38, 39
statement coverage................................ 40
statement testing.............................. 34, 40
static analysis................................... 29, 33
static analysis tool28, 33, 57, 58, 59, 60, 63
static technique ................................ 28, 29
static testing ..................................... 13, 29
stress testing.............................. 25, 57, 60
stress testing tool ............................. 57, 60
structural testing....................22, 25, 26, 40
structure-based technique................ 37, 40
structure-based test design technique ..... 37, 40
structure-based testing..................... 34, 40
stub .................................................. 22, 59
success factors ...................................... 32
system integration testing ................ 20, 22
system testing .....13, 20, 22, 23, 24, 47, 69
technical review ......................... 28, 30, 31
test analysis ..........................15, 36, 46, 47
test approach ..... 36, 46, 47, 48, 49
test basis................................................ 15
test case 13, 14, 15, 16, 22, 25, 29, 34, 35,
36, 37, 38, 39, 40, 43, 49, 54, 59, 69
test case specification................ 34, 36, 54
test design.............................................. 36
test case .................................... 13, 25, 36
test closure................................. 10, 15, 16
test condition.......................................... 36
test conditions.................13, 15, 25, 36, 37
test control ............................15, 43, 49, 50
test coverage ..............................15, 47, 48
test data.. 15, 16, 36, 38, 46, 57, 59, 62, 63
test data preparation tool ..................57, 59
test design13, 15, 20, 34, 35, 36, 37, 41, 46,
48, 57, 59, 62
test design specification......................... 43
test design technique ............34, 35, 36, 37
test design tool..................................57, 59
test development process ..... 36, 73
test effort................................................ 48
test environment 15, 16, 22, 23, 46, 49, 50,
59
test estimation........................................ 48
test execution13, 15, 16, 29, 33, 36, 41, 43,
56, 57, 59
test execution schedule ......................... 36
test execution tool ..... 16, 36, 56, 57, 59, 60, 62
test harness ....................16, 22, 51, 57, 59
test implementation.....................15, 36, 47
test leader.......................17, 43, 45, 46, 54
test leader tasks..................................... 46
test level 19, 20, 22, 25, 26, 27, 34, 38, 40,
42, 43, 46, 47
test log ..................................15, 16, 41, 59
test management ........................43, 57, 58
test management tool .......................57, 63
test manager.................................8, 45, 52
test monitoring ............................46, 49, 50
test objective ..... 13, 20, 25, 41, 42, 46, 49
test oracle .........................................59, 60
test organization .................................... 45
test plan . 15, 16, 29, 43, 46, 47, 51, 52, 53,
73
test planning .......15, 16, 43, 47, 51, 53, 73
test planning activities............................ 47
test procedure...........15, 16, 34, 36, 43, 47
test procedure specification ..............34, 36
test progress monitoring ........................ 49
test report..........................................43, 49
test reporting.....................................43, 49
test script ....................................16, 29, 36
test strategy ..........................15, 46, 47, 48
test suite ................................................ 26
test summary report ........15, 16, 43, 46, 49
test tool classification ............................. 57
test type ................................19, 25, 27, 46
test-driven development......................... 22
tester 10, 13, 17, 30, 35, 39, 41, 43, 45, 46,
51, 62, 67
tester tasks............................................. 46
test-first approach .................................. 22
testing and quality .................................. 11
testing principles .............................. 10, 14
testware ....................15, 16, 46, 51, 58, 60
tool support .....................22, 29, 40, 56, 62
tool support for management of testing and
tests................................................... 57
tool support for performance and monitoring 60
tool support for specific application areas ..... 60
tool support for static testing .................. 58
tool support for test execution and logging ..... 59
tool support for test specification............ 59
tool support for testing...................... 56, 62
tool support using other tool................... 61
top-down ................................................ 23
traceability...........34, 36, 46, 51, 57, 58, 60
transaction processing sequences ......... 23
types of test tool............................... 56, 57
unit test framework................22, 57, 59, 60
unit test framework tool .............. 57, 59, 60
upgrades................................................ 27
usability.....................11, 24, 25, 43, 45, 52
usability testing ................................ 25, 43
use case test.................................... 34, 38
use case testing ......................... 34, 38, 39
use cases..............................20, 23, 25, 39
user acceptance testing ................... 22, 24
validation................................................ 20
verification.............................................. 20
version control.................................. 51, 57
V-model ................................................. 20
walkthrough................................ 28, 30, 31
white-box test design technique ....... 37, 40
white-box testing .............................. 25, 40