Monday, January 14, 2008

Glossary of Testing Terms and Definitions

This testing glossary is intended to provide a set of common terms and definitions as used in IBM’s testing methodology. These definitions originate from many different industry standards and sources, such as the British Standards Institute, IEEE, and other IBM program development documents. Many of these terms are in common use and therefore may have a slightly different meaning elsewhere. If more than one definition is in common use, each has been included where appropriate.

Acceptance Criteria The definition of the results expected from the test cases used for acceptance testing. The product must meet these criteria before implementation can be approved.
Acceptance Testing (1) Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the client to determine whether or not to accept the system. (2) Formal testing conducted to enable a user, client, or other authorized entity to determine whether to accept a system or component.
Acceptance Test Plan Describes the steps the client will use to verify that the constructed system meets the acceptance criteria. It defines the approach to be taken for acceptance testing activities. The plan identifies the items to be tested, the test objectives, the acceptance criteria, the testing to be performed, test schedules, entry/exit criteria, staff requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning.
Ad hoc Testing A loosely structured testing approach that allows test developers to be creative in their test selection and execution. Ad hoc testing is targeted at known or suspected problem areas.
Audit and Controls Testing A functional type of test that verifies the adequacy and effectiveness of controls and completeness of data processing results.

Auditability A test focus area defined as the ability to provide supporting evidence to trace processing of data.

Backup and Recovery Testing A structural type of test that verifies the capability of the application to be restarted after a failure.
Black Box Testing Evaluation techniques that are executed without knowledge of the program’s implementation. The tests are based on an analysis of the specification of the component without reference to its internal workings.
Bottom-up Testing Approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See "Top-down".

Boundary Value Analysis A test case selection technique that selects test data that lie along "boundaries" or extremes of input and output possibilities. Boundary Value Analysis can apply to parameters, classes, data structures, variables, loops, etc.
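
As a minimal sketch of this technique in Python (the input field and its valid range of 1 to 100 are hypothetical), test data are selected at and just beyond each extreme:

    # Hypothetical specification: a field accepts integers from 1 to 100.
    LOWER, UPPER = 1, 100

    def accepts(value):
        """Return True if the value falls within the valid range."""
        return LOWER <= value <= UPPER

    # Boundary value analysis selects data at and just beyond each extreme.
    assert not accepts(LOWER - 1)  # just below the lower boundary
    assert accepts(LOWER)          # on the lower boundary
    assert accepts(UPPER)          # on the upper boundary
    assert not accepts(UPPER + 1)  # just above the upper boundary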

Branch Testing A white box testing technique that requires each branch or decision point to be taken once.
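
For illustration, a minimal Python sketch (the function is hypothetical): one decision point yields two branches, and each must be taken by at least one test:

    def classify(balance):
        """Hypothetical function with a single decision point."""
        if balance < 0:
            return "overdrawn"
        return "in credit"

    # Branch testing requires one test taking the true branch
    # and one taking the false branch.
    assert classify(-10) == "overdrawn"  # true branch
    assert classify(50) == "in credit"   # false branch
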
Build (1) An operational version of a system or component that incorporates a specified subset of the capabilities that the final product will provide. Builds are defined whenever the complete system cannot be developed and delivered in a single increment. (2) A collection of programs within a system that are functionally independent. A build can be tested as a unit and can be installed independent of the rest of the system.
Business Function A set of related activities that comprise a stand-alone unit of business. It may be defined as a process that results in the achievement of a business objective. It is characterized by well-defined start and finish activities and a workflow or pattern.
Capability Maturity Model (CMM) A model of the stages through which software organizations progress as they define, implement, evolve, and improve their software process. This model provides a guide for selecting process improvement strategies by determining current process capabilities and identifying the issues most critical to software quality and process improvement. This concept was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University.

Causal Analysis The evaluation of the cause of major errors, to determine actions that will prevent reoccurrence of similar errors.

Change Control The process by which a change is proposed, evaluated, approved or rejected, scheduled, and tracked.

Change Management A process methodology to identify the configuration of a release and to manage all changes through change control, data recording, and updating of baselines.

Change Request A documented proposal for a change of one or more work items or work item parts.
Condition Testing A white box test method that requires all decision conditions be executed once for true and once for false.
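
A minimal Python sketch (the function and its conditions are hypothetical): each atomic condition within the compound decision is driven both true and false:

    def discount_applies(is_member, order_total):
        """Hypothetical decision with two atomic conditions."""
        return is_member and order_total > 100

    # Each condition must evaluate to true and to false at least once.
    assert discount_applies(True, 150) is True    # both conditions true
    assert discount_applies(False, 150) is False  # first condition false
    assert discount_applies(True, 50) is False    # second condition false
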
Configuration Management (1) The process of identifying and defining the configuration items in a system, controlling the release and change of these items throughout the system life cycle, recording and reporting the status of configuration items and change requests, and verifying the completeness and correctness of configuration items.
(2) A discipline applying technical and administrative direction and surveillance to (a) identify and document the functional and physical characteristics of a configuration item, (b) control changes to those characteristics, and (c) record and report change processing and implementation status.
Conversion Testing A functional type of test that verifies the compatibility of converted programs, data, and procedures with the “old” ones that are being converted or replaced.
Coverage The extent to which test data tests a program’s functions, parameters, inputs, paths, branches, statements, conditions, modules or data flow paths.
Coverage Matrix Documentation procedure to indicate the testing coverage of test cases compared to possible elements of a program environment (e.g. inputs, outputs, parameters, paths, cause-effects, equivalence partitioning, etc.).

Continuity of Processing A test focus area defined as the ability to continue processing if problems occur. Included is the ability to backup and recover after a failure.
Correctness A test focus area defined as the ability to process data according to prescribed rules. Controls over transactions and data field edits provide an assurance on accuracy and completeness of data.

Data Flow Testing Testing in which test cases are designed based on variable usage within the code.
Debugging The process of locating, analyzing, and correcting suspected faults. Compare with testing.
Decision Coverage Percentage of decision outcomes that have been exercised through (white box) testing.
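
For example, a program with five decisions has ten decision outcomes (a true and a false outcome for each); if the tests executed so far exercise eight of those outcomes, decision coverage is 8/10 = 80%.
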
Defect A variance from expectations. See also Fault.
Defect Management A set of processes to manage the tracking and fixing of defects found during testing and to perform causal analysis.
Documentation and Procedures Testing A functional type of test that verifies that the interface between the system and the people works and is usable. It also verifies that the instruction guides are helpful and accurate.
Design Review (1) A formal meeting at which the preliminary or detailed design of a system is presented to the user, customer or other interested parties for comment and approval. (2) The formal review of an existing or proposed design for the purpose of detection and remedy of design deficiencies that could affect fitness-for-use and environmental aspects of the product, process or service, and/or for identification of potential improvements of performance, safety and economic aspects.
Desk Check Testing of software by the manual simulation of its execution. It is one of the static testing techniques.
Detailed Test Plan The detailed plan for a specific level of dynamic testing. It defines what is to be tested and how it is to be tested. The plan typically identifies the items to be tested, the test objectives, the testing to be performed, test schedules, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning. It also includes the testing tools and techniques, test environment set up, entry and exit criteria, and administrative procedures and controls.

Driver A program that exercises a system or system component by simulating the activity of a higher level component.
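
A minimal Python sketch with all names hypothetical: the driver stands in for the higher-level module that would normally call the component under test:

    def calculate_tax(amount):
        """Lower-level component under test (hypothetical)."""
        return round(amount * 0.08, 2)

    def driver():
        """Simulates the higher-level module that would normally
        call calculate_tax, feeding it sample inputs."""
        for amount, expected in [(100.0, 8.0), (19.99, 1.6)]:
            result = calculate_tax(amount)
            print(amount, result, "PASS" if result == expected else "FAIL")

    driver()
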
Dynamic Testing Testing that is carried out by executing the code. Dynamic testing is a process of validation by exercising a work product and observing the behavior of its logic and its response to inputs.
Entry Criteria A checklist of activities or work items that must be complete or exist, respectively, before the start of a given task within an activity or sub-activity.

Environment See Test Environment.
Equivalence Partitioning A portion of the component’s input or output domains for which, based on the component’s specification, the component’s behavior is assumed to be the same.
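
For illustration, a hypothetical specification with three partitions; one representative value per partition stands in for the whole class:

    # Hypothetical specification: ages 0-17 are "minor", 18-64 "adult",
    # 65 and over "senior". Each range is one equivalence partition.
    def category(age):
        if age < 18:
            return "minor"
        if age < 65:
            return "adult"
        return "senior"

    # One representative per partition, since behavior is assumed
    # to be the same throughout each partition.
    assert category(10) == "minor"
    assert category(40) == "adult"
    assert category(70) == "senior"
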
Error (1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. (2) A human action that results in software containing a fault; this includes omissions, misinterpretations, etc. See Variance.
Error Guessing A test case selection process that identifies test cases based on the knowledge and ability of the individual to anticipate probable errors.

Error Handling Testing A functional type of test that verifies the system functions for detecting and responding to exception conditions. Completeness of error handling determines the usability of a system and ensures that incorrect transactions are properly handled.
Execution Procedure A sequence of manual or automated steps required to carry out part or all of a test design or execute a set of test cases.
Exit Criteria (1) Actions that must happen before an activity is considered complete. (2) A checklist of activities or work items that must be complete or exist, respectively, prior to the end of a given process stage, activity, or sub-activity.

Expected Results Predicted output data and file conditions associated with a particular test case. Expected results, if achieved, indicate that the test was successful. They are generated and documented with the test case prior to execution of the test.

Fault (1) An accidental condition that causes a functional unit to fail to perform its required functions. (2) A manifestation of an error in software. A fault, if encountered, may cause a failure. Synonymous with bug.

Full Lifecycle Testing The process of verifying the consistency, completeness, and correctness of software and related work products (such as documents and processes) at each stage of the development life cycle.

Function (1) A specific purpose of an entity or its characteristic action. (2) A set of related control statements that perform a related operation. Functions are sub-units of modules.

Function Testing A functional type of test that verifies that each business function operates according to the detailed requirements and the external and internal design specifications.

Functional Testing Selecting and executing test cases based on specified function requirements without knowledge of or regard for the program structure. Also known as black box testing. See "Black Box Testing".

Functional Test Types Those kinds of tests used to assure that the system meets the business requirements, including business functions, interfaces, usability, audit and controls, error handling, etc. See also Structural Test Types.
Implementation (1) A realization of an abstraction in more concrete terms; in particular, in terms of hardware, software, or both. (2) The process by which a software release is installed in production and made available to end users.

Inspection (1) A group review quality improvement process for written material, consisting of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). (2) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. Contrast with walk-through.
Installation Testing A functional type of test which verifies that the hardware, software and applications can be easily installed and run in the target environment.

Integration Testing A level of dynamic testing which verifies the proper execution of application components and does not require that the application under test interface with other applications.

Interface / Inter-system Testing A functional type of test which verifies that the interconnection between applications and systems functions correctly.
JAD An acronym for Joint Application Design. Formal session(s) involving clients and developers used to develop and document consensus on work products, such as client requirements, design specifications, etc.
Level of Testing Refers to the progression of software testing through static and dynamic testing.
Examples of static testing levels are Project Objectives Review, Requirements Walkthrough, Design (External and Internal) Review, and Code Inspection.
Examples of dynamic testing levels are: Unit Testing, Integration Testing, System Testing, Acceptance Testing, Systems Integration Testing and Operability Testing.
Also known as a test level.
Lifecycle The software development process stages: Requirements, Design, Construction (Code/Program, Test), and Implementation.

Logical Path A path that begins at an entry or decision statement and ends at a decision statement or exit.
Maintainability A test focus area defined as the ability to locate and fix an error in the system. Can also be the ability to make dynamic changes to the system environment without making system changes.

Master Test Plan A plan that addresses testing from a high-level system viewpoint. It ties together all levels of testing (unit test, integration test, system test, acceptance test, systems integration, and operability). It includes test objectives, test team organization and responsibilities, high-level schedule, test scope, test focus, test levels and types, test facility requirements, and test management procedures and controls.
Operability A test focus area defined as the effort required (of support personnel) to learn and operate a manual or automated system. Contrast with Usability.

Operability Testing A level of dynamic testing in which the operations of the system are validated in the real or closely simulated production environment. This includes verification of production JCL, installation procedures, and operations procedures. Operability Testing considers such factors as performance, resource consumption, adherence to standards, etc. It is normally performed by Operations to assess the readiness of the system for implementation in the production environment.

Operational Testing A structural type of test that verifies the ability of the application to operate at an acceptable level of service in the production-like environment.
Parallel Testing A functional type of test that verifies that the same input produces the same results on both the “old” and “new” systems. It is more of an implementation strategy than a testing strategy.
Path Testing A white box testing technique that requires all code or logic paths to be executed once. Complete path testing is usually impractical and often uneconomical.
Performance A test focus area defined as the ability of the system to perform certain functions within a prescribed time.

Performance Testing A structural type of test which verifies that the application meets the expected level of performance in a production-like environment.

Portability A test focus area defined as ability for a system to operate in multiple operating environments.
Problem (1) A call or report from a user. The call or report may or may not be defect oriented. (2) A software or process deficiency found during development. (3) The inhibitors and other factors that hinder an organization’s ability to achieve its goals and critical success factors. (4) An issue that a project manager has the authority to resolve without escalation. Compare to ‘defect’ or ‘error’.
Quality Plan A document which describes the organization, activities, and project factors that have been put in place to achieve the target level of quality for all work products in the application domain. It defines the approach to be taken when planning and tracking the quality of the application development work products to ensure conformance to specified requirements and to ensure the client’s expectations are met.
Regression Testing A functional type of test, which verifies that changes to one part of the system have not caused unintended adverse effects to other parts.

Reliability A test focus area defined as the extent to which the system will provide the intended function without failing.

Requirement (1) A condition or capability needed by the user to solve a problem or achieve an objective. (2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. The set of all requirements forms the basis for subsequent development of the system or system component.
Review A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval.
Root Cause Analysis See Causal Analysis.
Scaffolding Temporary programs that may be needed to create data for, or receive data from, the specific program under test. This approach is called scaffolding.
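
A minimal Python sketch (the file name and record layout are hypothetical): a throwaway program that generates an input file for the program under test:

    import csv
    import random

    def build_test_file(path, rows=100):
        """Throwaway scaffolding: writes a generated input file
        for the program under test (layout is hypothetical)."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["account", "amount"])
            for i in range(rows):
                writer.writerow([1000 + i, round(random.uniform(1, 500), 2)])

    build_test_file("test_input.csv")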

Security A test focus area defined as the assurance that the system/data resources will be protected against accidental and/or intentional modification or misuse.

Security Testing A structural type of test which verifies that the application provides an adequate level of protection for confidential information and data belonging to other systems.
Software Quality (1) The totality of features and characteristics of a software product that bear on its ability to satisfy given needs; for example, conform to specifications. (2) The degree to which software possesses a desired combination of attributes. (3) The degree to which a customer or user perceives that software meets his or her composite expectations. (4) The composite characteristics of software that determine the degree to which the software in use will meet the expectations of the customer.
Software Reliability (1) The probability that software will not cause the failure of a system for a specified time under specified conditions. The probability is a function of the inputs to and use of the system as well as a function of the existence of faults in the software. The inputs to the system determine whether existing faults, if any, are encountered. (2) The ability of a program to perform a required function under stated conditions for a stated period of time.
Statement Testing A white box testing technique that requires all code or logic statements to be executed at least once.
Static Testing (1) The detailed examination of a work product's characteristics against an expected set of attributes, experiences, and standards. The product under scrutiny is static and not exercised, so its behavior in response to changing inputs and environments cannot be assessed. (2) The process of evaluating a program without executing the program. See also desk checking, inspection, walk-through.

Stress / Volume Testing A structural type of test that verifies that the application has acceptable performance characteristics under peak load conditions.
Structural Function Structural functions describe the technical attributes of a system.
Structural Test Types Those kinds of tests that may be used to assure that the system is technically sound.

Stub (1) A dummy program element or module used during the development and testing of a higher level element or module. (2) A program statement substituting for the body of a program unit and indicating that the unit is or will be defined elsewhere. The inverse of Scaffolding.
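
A minimal Python sketch (all names hypothetical): the stub returns a fixed, known value so the higher-level component can be tested before the real lower-level module exists:

    def exchange_rate_stub(currency):
        """Stub for a lower-level rate-lookup module that is not
        yet built; returns a fixed, known value."""
        return 1.25

    def convert(amount, currency, rate_lookup=exchange_rate_stub):
        """Higher-level component under test (hypothetical)."""
        return amount * rate_lookup(currency)

    # The higher-level component is testable in isolation.
    assert convert(100, "EUR") == 125.0
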
Sub-system (1) A group of assemblies or components or both combined to perform a single function. (2) A group of functionally related components that are defined as elements of a system but not separately packaged.
System A collection of components organized to accomplish a specific function or set of functions.

Systems Integration Testing A dynamic level of testing which ensures that the systems integration activities properly address the integration of application subsystems, integration of applications with the infrastructure, and impact of change on the current live environment.
System Testing A dynamic level of testing in which all the components that comprise a system are tested to verify that the system functions together as a whole.
Test Bed (1) A test environment containing the hardware, instrumentation tools, simulators, and other support software necessary for testing a system or system component. (2) A set of test files (including databases and reference files), in a known state, used with input test data to test one or more test conditions, measuring against expected results.
Test Case (1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (2) The detailed objectives, data, procedures, and expected results to conduct a test or part of a test.
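
As an illustration, a test case's objective, inputs, and expected result sketched with Python's unittest (the function under test is hypothetical):

    import unittest

    def apply_discount(total, rate):
        """Hypothetical function under test."""
        return total - total * rate

    class TestApplyDiscount(unittest.TestCase):
        def test_ten_percent_discount(self):
            # Objective: verify a 10% discount is deducted correctly.
            # Inputs: total=200.0, rate=0.10; expected result: 180.0.
            self.assertAlmostEqual(apply_discount(200.0, 0.10), 180.0)

    if __name__ == "__main__":
        unittest.main()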

Test Condition A functional or structural attribute of an application, system, network, or component thereof to be tested.
Test Conditions Matrix A worksheet used to formulate the test conditions that, if met, will produce the expected result. It is a tool used to assist in the design of test cases.

Test Conditions Coverage Matrix A worksheet that is used for planning and for illustrating that all test conditions are covered by one or more test cases. Each test set has a Test Conditions Coverage Matrix. Rows are used to list the test conditions and columns are used to list all test cases in the test set.
Test Coverage Matrix A worksheet used to plan and cross check to ensure all requirements and functions are covered adequately by test cases.
Test Data The input data and file conditions associated with a specific test case.
Test Environment The external conditions or factors that can directly or indirectly influence the execution and results of a test. This includes the physical as well as the operational environments. Examples of what is included in a test environment are: I/O and storage devices, data files, programs, JCL, communication lines, access control and security, databases, reference tables and files (version controlled), etc.

Test Focus Areas Those attributes of an application that must be tested in order to assure that the business and structural requirements are satisfied.
Test Level See Level of Testing.
Test Log A chronological record of all relevant details of a testing activity.
Test Matrices A collection of tables and matrices used to relate functions to be tested with the test cases that do so. Worksheets used to assist in the design and verification of test cases.
Test Objectives The tangible goals for assuring that the Test Focus areas previously selected as being relevant to a particular Business or Structural Function are being validated by the test.

Test Plan A document prescribing the approach to be taken for intended testing activities. The plan typically identifies the items to be tested, the test objectives, the testing to be performed, test schedules, entry / exit criteria, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning.

Test Procedure Detailed instructions for the setup, operation, and evaluation of results for a given test. A set of associated procedures is often combined to form a test procedures document.
Test Report A document describing the conduct and results of the testing carried out for a system or system component.
Test Run A dated, time-stamped execution of a set of test cases.
Test Scenario A high-level description of how a given business or technical requirement will be tested, including the expected outcome; later decomposed into sets of test conditions, each in turn containing test cases.
Test Script A sequence of actions that executes a test case. Test scripts include detailed instructions for set up, execution, and evaluation of results for a given test case.

Test Set A collection of test conditions. Test sets are created for purposes of test execution only. A test set is created such that its size is manageable to run and its grouping of test conditions facilitates testing. The grouping reflects the application build strategy.
Test Sets Matrix A worksheet that relates the test conditions to the test set in which the condition is to be tested. Rows list the test conditions and columns list the test sets. A checkmark in a cell indicates the test set will be used for the corresponding test condition.
Test Specification A set of documents that define and describe the actual test architecture, elements, approach, data and expected results. Test Specification uses the various functional and non-functional requirement documents along with the quality and test plans. It provides the complete set of test cases and all supporting detail to achieve the objectives documented in the detailed test plan.
Test Strategy A high level description of major system-wide activities which collectively achieve the overall desired result as expressed by the testing objectives, given the constraints of time and money and the target level of quality. It outlines the approach to be used to ensure that the critical attributes of the system are tested adequately.
Test Type See Type of Testing.
Testability (1) The extent to which software facilitates both the establishment of test criteria and the evaluation of the software with respect to those criteria. (2) The extent to which the definition of requirements facilitates analysis of the requirements to establish test criteria.
Testing The process of exercising or evaluating a program, product, or system, by manual or automated means, to verify that it satisfies specified requirements and to identify differences between expected and actual results.
Testware The elements that are produced as part of the testing process. Testware includes plans, designs, test cases, test logs, test reports, etc.
Top-down Approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Transaction Flow Testing A functional type of test that verifies the proper and complete processing of a transaction from the time it enters the system to the time of its completion or exit from the system.
Type of Testing Tests a functional or structural attribute of the system. E.g. Error Handling, Usability. (Also known as test type.)
Unit Testing The first level of dynamic testing: the verification of new or changed code in a module to determine whether all new or modified paths function correctly.

Usability A test focus area defined as the end-user effort required to learn and use the system. Contrast with Operability.

Usability Testing A functional type of test which verifies that the final product is user-friendly and easy to use.
User Acceptance Testing See Acceptance Testing.
Validation (1) The act of demonstrating that a work item is in compliance with the original requirement. For example, the code of a module would be validated against the input requirements it is intended to implement. Validation answers the question "Is the right system being built?" (2) Confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use have been fulfilled. See "Verification".

Variance A mismatch between the actual and expected results occurring in testing. It may result from errors in the item being tested, incorrect expected results, invalid test data, etc. See "Error".

Verification (1) The act of demonstrating that a work item is satisfactory by using its predecessor work item. For example, code is verified against module level design. Verification answers the question "Is the system being built right?" (2) Confirmation by examination and provision of objective evidence that specified requirements have been fulfilled. See "Validation".

Walkthrough A review technique in which the author of the object under review guides the progression of the review. Observations made in the review are documented and addressed. A walkthrough is a less formal evaluation technique than an inspection.
White Box Testing Evaluation techniques that are executed with knowledge of the program’s implementation. The objective of white box testing is to test the program's statements, code paths, conditions, or data flow paths.
Work Item A software development lifecycle work product.
Work Product (1) The result produced by performing a single task or many tasks. A work product, also known as a project artifact, is part of a major deliverable that is visible to the client. Work products may be internal or external. An internal work product may be produced as an intermediate step for future use within the project, while an external work product is produced for use outside the project as part of a major deliverable. (2) As related to test, software deliverable that is the object of a test, a test work item.
