Verification and Validation (V&V) is the process of checking that a product, service, or system meets specifications and that it fulfills its intended purpose. These are critical components of a quality management system such as ISO 9000.
Verification is a quality process used to evaluate whether or not a product, service, or system complies with a regulation or specification. This is often an internal process.
Validation is the process of establishing documented evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended function. This often involves acceptance and suitability with external customers.
It is sometimes said that validation ensures that ‘you built the right thing’ and verification ensures that ‘you built it right’. Both require written requirements as well as formal procedures or protocols for determining compliance. Documentation is critical.
V. VERIFICATION AND VALIDATION
A. Concepts and Definitions
Software Verification and Validation (V&V) is the process of
ensuring that software being developed or changed will
satisfy functional and other requirements (validation) and
that each step in the process of building the software yields
the
right products (verification). The differences between
verification and validation are unimportant except to the
theorist; practitioners use the term V&V to refer to all of
the activities that are aimed at making sure the software
will function as required.
V&V is intended to be a systematic and technical evaluation
of software and associated products of the development and
maintenance processes. Reviews and tests are done at the
end of each phase of the development process to ensure
software requirements are complete and testable and that
design, code, documentation, and data satisfy those
requirements.
B. Activities
The two major V&V activities are reviews, including
inspections and walkthroughs, and testing.
1. Reviews, Inspections, and Walkthroughs
Reviews are conducted during and at the end of each phase of
the life cycle to determine whether established
requirements, design concepts, and specifications have been
met. Reviews consist of the presentation of material to a
review board or panel. Reviews are most effective when
conducted by personnel who have not been directly involved
in the development of the software being reviewed.
Informal reviews are conducted on an as-needed basis. The
developer chooses a review panel and provides and/or
presents the material to be reviewed. The material may be
as informal as a computer listing or hand-written
documentation.
Formal reviews are conducted at the end of each life cycle
phase. The acquirer of the software appoints the formal
review panel or board, who may make or affect a go/no-go
decision to proceed to the next step of the life cycle.
Formal reviews include the Software Requirements Review, the
Software Preliminary Design Review, the Software Critical
Design Review, and the Software Test Readiness Review.
An inspection or walkthrough is a detailed examination of a
product on a step-by-step or line-of-code by line-of-code
basis. The purpose of conducting inspections and
walkthroughs is to find errors. The group that does an
inspection or walkthrough is composed of peers from
development, test, and quality assurance.
2. Testing
Testing is the operation of the software with real or
simulated inputs to demonstrate that a product satisfies its
requirements and, if it does not, to identify the specific
differences between expected and actual results. There are
varied levels of software tests, ranging from unit or
element testing through integration testing and performance
testing, up to software system and acceptance tests.
a. Informal Testing
Informal tests are done by the developer to measure the
development progress. "Informal" in this case does not mean
that the tests are done in a casual manner, just that the
acquirer of the software is not formally involved, that
witnessing of the testing is not required, and that the
prime purpose of the tests is to find errors. Unit,
component, and subsystem integration tests are usually
informal tests.
Informal testing may be requirements-driven or design-
driven. Requirements-driven or black box testing is done by
selecting the input data and other parameters based on the
software requirements and observing the outputs and
reactions of the software. Black box testing can be done at
any level of integration. In addition to testing for
satisfaction of requirements, some of the objectives of
requirements-driven testing are to ascertain:
- Computational correctness.
- Proper handling of boundary conditions, including extreme
  inputs and conditions that cause extreme outputs.
- State transitioning as expected.
- Proper behavior under stress or high load.
- Adequate error detection, handling, and recovery.
Design-driven or white box testing is testing in which the
tester examines the internal workings of the code. Design-
driven testing is done by selecting the input data and other
parameters based on the internal logic paths that are to be
checked. The goals of design-driven testing include
ascertaining correctness of:
- All paths through the code. For most software products,
  this can be feasibly done only at the unit test level.
- Bit-by-bit functioning of interfaces.
- Size and timing of critical elements of code.
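A short sketch shows the difference in how cases are chosen.
The classify function below is hypothetical; each input is
picked by reading its branches, so that every internal path is
exercised at least once:

    def classify(reading, limit):
        # Two decision points yield three distinct paths.
        if reading is None:       # path A: missing input
            return "missing"
        if reading > limit:       # path B: over limit
            return "alarm"
        return "normal"           # path C: nominal

    # Design-driven cases: one input per internal path, chosen
    # by inspecting the branches above, not the requirements.
    assert classify(None, 10) == "missing"  # path A
    assert classify(15, 10) == "alarm"      # path B
    assert classify(5, 10) == "normal"      # path C
    assert classify(10, 10) == "normal"     # boundary of the > test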
b. Formal Tests
Formal testing demonstrates that the software is ready for
its intended use. A formal test should include an acquirer-
approved test plan and procedures, quality assurance
witnesses, a record of all discrepancies, and a test report.
Formal testing is always requirements-driven, and its
purpose is to demonstrate that the software meets its
requirements.
Each software development project should have at least one
formal test, the acceptance test that concludes the
development activities and demonstrates that the software is
ready for operations.
In addition to the final acceptance test, other formal
testing may be done on a project. For example, if the
software is to be developed and delivered in increments or
builds, there may be incremental acceptance tests. As a
practical matter, any contractually required test is usually
considered a formal test; others are "informal."
After acceptance of a software product, all changes to the
product should be accepted as a result of a formal test.
Post-acceptance testing should include regression testing.
Regression testing involves rerunning previously used
acceptance tests to ensure that the change did not disturb
functions that have previously been accepted.
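In practice a regression suite is often simply the accepted
tests kept under configuration control and rerun mechanically
after every change. A minimal sketch in Python, assuming the
accepted tests live in a hypothetical tests/acceptance
directory:

    import subprocess
    import sys

    def run_regression_suite():
        # Rerun the previously accepted tests exactly as they
        # were run at acceptance; any new failure flags a
        # regression introduced by the change under review.
        result = subprocess.run(
            [sys.executable, "-m", "unittest",
             "discover", "-s", "tests/acceptance"])
        return result.returncode == 0

    if __name__ == "__main__":
        sys.exit(0 if run_regression_suite() else 1)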
C. Verification and Validation During the Software
Acquisition Life Cycle
The V&V Plan should cover all V&V activities to be performed
during all phases of the life cycle. The V&V Plan Data Item
Description (DID) may be rolled out of the Product Assurance
Plan DID contained in the SMAP Management Plan Documentation
Standard and DID.
1. Software Concept and Initiation Phase
The major V&V activity during this phase is to develop a
concept of how the system is to be reviewed and tested.
Simple projects may compress the life cycle steps; if so,
the reviews may have to be compressed. Test concepts may
involve simple generation of test cases by a user
representative or may require the development of elaborate
simulators and test data generators. Without an adequate
V&V concept and plan, the cost, schedule, and complexity of
the project may be poorly estimated due to the lack of
adequate test capabilities and data.
2. Software Requirements Phase
V&V activities during this phase should include:
- Analyzing software requirements to determine if they are
  consistent with, and within the scope of, system
  requirements.
- Assuring that the requirements are testable and capable of
  being satisfied.
- Creating a preliminary version of the Acceptance Test Plan,
  including a verification matrix, which relates requirements
  to the tests used to demonstrate that requirements are
  satisfied (a minimal sketch of such a matrix follows this
  list).
- Beginning development, if needed, of test beds and test
  data generators.
- The phase-ending Software Requirements Review (SRR).
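The verification matrix itself can be as simple as a table
mapping each requirement identifier to the tests that
demonstrate it. A minimal sketch in Python, with hypothetical
requirement and test identifiers, that also flags requirements
not yet covered by any test:

    # Hypothetical matrix: requirement ID -> demonstrating tests.
    verification_matrix = {
        "REQ-001": ["AT-01", "AT-02"],
        "REQ-002": ["AT-03"],
        "REQ-003": [],  # not yet covered -- a planning gap
    }

    def uncovered_requirements(matrix):
        # Requirements with no associated test cannot be
        # demonstrated at acceptance and should be resolved
        # before the SRR.
        return [req for req, tests in matrix.items() if not tests]

    print(uncovered_requirements(verification_matrix))  # ['REQ-003']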
3. Software Architectural (Preliminary) Design Phase
V&V activities during this phase should include:
- Updating the preliminary version of the Acceptance Test
  Plan and the verification matrix.
- Conducting informal reviews and walkthroughs or inspections
  of the preliminary software and data base designs.
- The phase-ending Preliminary Design Review (PDR), at which
  the allocation of requirements to the software architecture
  is reviewed and approved.
4. Software Detailed Design Phase
V&V activities during this phase should include:
- Completing the Acceptance Test Plan and the verification
  matrix, including test specifications and unit test plans.
- Conducting informal reviews and walkthroughs or inspections
  of the detailed software and data base designs.
- The Critical Design Review (CDR), which completes the
  software detailed design phase.
5. Software Implementation Phase
V&V activities during this phase should include:
- Code inspections and/or walkthroughs.
- Unit testing software and data structures.
- Locating, correcting, and retesting errors.
- Development of detailed test procedures for the next two
  phases.
6. Software Integration and Test Phase
This phase is a major V&V effort, where the tested units
from the previous phase are integrated into subsystems and
then the final system. Activities during this phase should
include:
- Conducting tests per test procedures.
- Documenting test performance, test completion, and
  conformance of test results versus expected results (a
  sketch of recording conformance appears after this list).
- Providing a test report that includes a summary of
  nonconformances found during testing.
- Locating, recording, correcting, and retesting
  nonconformances.
- The Test Readiness Review (TRR), confirming the product's
  readiness for acceptance testing.
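Conformance of actual versus expected results can be captured
in a simple structure that doubles as input to the test
report. A sketch in Python, with hypothetical test identifiers
and values:

    from dataclasses import dataclass

    @dataclass
    class TestResult:
        test_id: str
        expected: object
        actual: object

        @property
        def conforms(self):
            return self.expected == self.actual

    # Hypothetical results logged while running the procedures.
    results = [
        TestResult("IT-01", expected=200, actual=200),
        TestResult("IT-02", expected="alarm", actual="normal"),
    ]

    nonconformances = [r for r in results if not r.conforms]
    print(f"{len(results)} tests run, "
          f"{len(nonconformances)} nonconformance(s)")
    for r in nonconformances:
        print(f"  {r.test_id}: expected {r.expected!r}, "
              f"got {r.actual!r}")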
7. Software Acceptance and Delivery Phase
V&V activities during this phase should include:
- By test, analysis, and inspection, demonstrating that the
  developed system meets its functional, performance, and
  interface requirements.
- Locating, correcting, and retesting nonconformances.
- The phase-ending Acceptance Review (AR).
8. Software Sustaining Engineering and Operations Phase
Any V&V activities conducted during the prior seven phases
are conducted during this phase as they pertain to the
revision or update of the software.
D. Independent Verification and Validation
Independent Verification and Validation (IV&V) is a process
whereby the products of the software development life cycle
phases are independently reviewed, verified, and validated
by an organization that is neither the developer nor the
acquirer of the software. The IV&V agent should have no
stake in the success or failure of the software. The IV&V
agent's only interest should be to make sure that the
software is thoroughly tested against its complete set of
requirements.
The IV&V activities duplicate the V&V activities step-by-
step during the life cycle, with the exception that the IV&V
agent does no informal testing. If there is an IV&V agent,
the formal acceptance testing may be done only once, by the
IV&V agent. In this case, the developer will do a formal
demonstration that the software is ready for formal
acceptance.
E. Techniques and Tools
Perhaps more tools have been developed to aid the V&V of
software (especially testing) than for any other software
activity. The tools available include code tracers, special
purpose memory dumpers and formatters, data generators,
simulations, and emulations. Some tools are essential for
testing any significant set of software, and, if they have
to be developed, may turn out to be a significant cost and
schedule driver.
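As one small example of such a tool, a test data generator can
be a short script that produces repeatable pseudo-random
inputs. The sketch below, in Python, uses a fixed seed so that
a failing run can be reproduced exactly; the record layout is
hypothetical:

    import csv
    import random

    def generate_test_data(path, count, seed=42):
        # A fixed seed makes the data reproducible, so any
        # failure it provokes can be replayed while debugging.
        rng = random.Random(seed)
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["record_id", "temperature", "status"])
            for i in range(count):
                writer.writerow([
                    i,
                    round(rng.uniform(-50.0, 150.0), 2),
                    rng.choice(["ok", "fault", "missing"]),
                ])

    if __name__ == "__main__":
        generate_test_data("test_inputs.csv", count=1000)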
An especially useful technique for finding errors is the
formal inspection. Formal inspections were developed by
Michael Fagan of IBM. Like walkthroughs, inspections
involve the line-by-line evaluation of the product being
reviewed. Inspections, however, are significantly different
from walkthroughs and are significantly more effective.
Inspections are done by a team, each member of which has a
specific role. The team is led by a moderator, who is
formally trained in the inspection process. The team
includes a reader, who leads the team through the item; one
or more reviewers, who look for faults in the item; a
recorder, who notes the faults; and the author, who helps
explain the item being inspected.
This formal, highly structured inspection process has been
extremely effective in finding and eliminating errors. It
can be applied to any product of the software development
process, including documents, design, and code. One of its
important side benefits has been the direct feedback to the
developer/author, and the significant improvement in quality
that results.