Tuesday, September 11, 2007

IEEE 829 Documentation

Over the years a number of types of document have been invented to allow for the control of testing. They apply to software testing of all kinds, from component testing through to release testing. Each organisation develops these documents itself, gives them different names, and in some cases confuses their purposes. To provide a common set of standardised documents, the IEEE developed the 829 Standard for Software Test Documentation, which applies to any type of software testing, including User Acceptance Testing.

This White Paper outlines each of the types of document in this standard and describes how they work together.

The Types of Document

There are eight document types in the IEEE 829 standard, which can be used in three distinct phases of software testing:

1. Preparation Of Tests

· Test Plan: Plan how the testing will proceed.

· Test Design Specification: Decide what needs to be tested.

· Test Case Specification: Create the tests to be run.

· Test Procedure: Describe how the tests are run.

· Test Item Transmittal Report: Specify the items released for testing.

2. Running The Tests

· Test Log: Record the details of tests in time order.

· Test Incident Report: Record details of events that need to be investigated.

3. Completion of Testing

· Test Summary Report: Summarise and evaluate tests.

Documentation For Preparation Of Tests

The preparation for testing is the most important part of any software testing project and easily accounts for most of the paperwork. The purpose of this stage is to prepare an effective and efficient set of tests, and to create the environment for them to run in.

IEEE 829 - Test Plan

The Test Plan is the pivotal document around which a software testing project revolves. It describes:

· what has to be done,

· to what quality standard,

· with what resource,

· to what time scale,

· and outlines the risks and how they would be overcome.

More details about this document are in the Test Plan overview article.

IEEE 829 - Test Design Specification

Creating the test design is the first stage in developing the tests for a software testing project. It records what needs to be tested, and is derived from the documents that come into the testing stage, such as requirements and designs. It records which features of a test item are to be tested, and how a successful test of these features would be recognized.

As an example, let's use a Billing project from which the following testing requirements may be defined:

· A normal bill can be produced.

· A final bill can be produced.

· The volume discount is properly calculated.

The test design does not record the values to be entered for a test, but describes the requirements for defining those values.
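
A minimal sketch, in Python, of what these Test Design entries for the Billing project might look like as structured data. The wording of each pass criterion is an invented assumption, not taken from the standard:

```python
# Illustrative sketch of Test Design entries for the Billing project.
# Each entry names a feature to be tested and how a successful test would
# be recognised -- but not the concrete input values, which belong in the
# Test Case Specification. The pass criteria below are invented.

test_design = [
    {"feature": "normal bill produced",
     "pass_when": "a bill is generated for an active account at period end"},
    {"feature": "final bill produced",
     "pass_when": "a closing bill is generated when an account is terminated"},
    {"feature": "volume discount calculated",
     "pass_when": "the discount matches the agreed rate for the usage band"},
]

for entry in test_design:
    print(f"{entry['feature']}: passes when {entry['pass_when']}")
```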

This document is very valuable, but is often missing on many projects. The reason is that people start writing test cases before they have decided what they are going to test.

IEEE 829 - Test Case Specification

The test cases are produced once the test design is completed. Test cases specify, for each testing requirement:

· The exact values that will be input, and the values of any standing data that is required,

· The exact output values and changes of value of the internal system state that are expected,

· And any special steps for setting up the tests.

Defining the expected values is very important, for only by doing this can discrepancies be spotted. However, on some projects they are not defined, which results in a very poor-quality set of test cases.

A feature from the Test Design may be tested in more than one Test Case, and a Test Case may test more than one feature. The aim is for a set of test cases to test each feature from the Test Design at least once. Taking the Billing project example all three requirements could be tested using two test cases:

· The first test case could test both that a normal bill is produced and that a volume discount is properly calculated.

· A second test case could check that a final bill is produced and a volume discount is calculated.
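
A minimal sketch of how these two test cases might be specified, assuming an invented discount rule (10% off usage above 1,000 units) and invented prices, since the article does not define them:

```python
# Illustrative sketch of two IEEE 829-style Test Case Specifications for the
# Billing project. Identifiers, the discount rule, and all figures are
# invented assumptions for this example.

DISCOUNT_RATE = 0.10       # assumed: 10% discount on usage above 1,000 units
DISCOUNT_THRESHOLD = 1000

def expected_amount(units, unit_price):
    """Expected bill value: the discount applies only above the threshold."""
    if units <= DISCOUNT_THRESHOLD:
        return units * unit_price
    discounted = (units - DISCOUNT_THRESHOLD) * unit_price * (1 - DISCOUNT_RATE)
    return DISCOUNT_THRESHOLD * unit_price + discounted

test_cases = [
    {"id": "TC-01",
     "features": ["normal bill produced", "volume discount calculated"],
     "inputs": {"bill_type": "normal", "units": 1500, "unit_price": 0.50},
     "expected": {"amount": expected_amount(1500, 0.50)}},   # 500 + 225 = 725.0
    {"id": "TC-02",
     "features": ["final bill produced", "volume discount calculated"],
     "inputs": {"bill_type": "final", "units": 2000, "unit_price": 0.50},
     "expected": {"amount": expected_amount(2000, 0.50)}},   # 500 + 450 = 950.0
]
```

The point is that the expected values are worked out in advance; only then can a discrepancy be spotted when the tests are run.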

IEEE 829 - Test Procedure Specification

The Test Procedures are developed from both the Test Design and the Test Case Specification. The document describes how the tester will physically run the test, the physical set-up required, and the procedure steps that need to be followed. The standard defines ten procedure steps that may be applied when running a test.

IEEE 829 - Test Item Transmittal Report

This curiously named document is not derived from the Test Plan but is the handover document from the previous stage of development. In User Acceptance Testing this may be the completion of System Testing.

It describes the items being delivered for testing, where to find them, and what is new about them, and gives approval for their release. The document's importance is that it provides the testers with a warranty that the items are fit to be tested, and it gives a clear mandate to start testing. Do not start testing without receiving one!

Documentation For Running The Tests

Once the tests have been developed, they can be run. The schedule of which Test Cases are run, and when, is defined in the Test Plan. The test results are recorded in the Test Log and in Test Incident Reports.

IEEE 829 - Test Log

The Test Log records the details of which Test Cases have been run, the order in which they were run, and the result of each test. The result is either that the test passed, meaning that the actual and expected results were identical, or that it failed, meaning that there was a discrepancy. If there is a discrepancy, then one or more Test Incident Reports are raised or updated, and their identities recorded in the Test Log.

The Test Log is important as it allows progress of the testing to be checked, as well as providing valuable information for finding out what caused an incident. If an incident is a coding fault, the fault may have occurred not in the Test Case that failed but in one that was run previously. Thus the sequence of the tests enables the fault to be found.
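
A minimal sketch of a Test Log as structured data, using invented identifiers and timestamps, which also shows how the running order helps trace a fault back through earlier tests:

```python
# Illustrative sketch of a Test Log: entries are kept in time order, each
# recording the Test Case run, its result, and any Test Incident Reports
# raised. All identifiers and timestamps are invented for this example.

test_log = [
    {"time": "2007-09-11T09:05", "test_case": "TC-01", "result": "pass", "incidents": []},
    {"time": "2007-09-11T09:40", "test_case": "TC-02", "result": "fail", "incidents": ["TIR-01"]},
]

# The tests run before a failure, in order: candidates for where a coding
# fault may actually have been introduced.
first_failure = next(i for i, e in enumerate(test_log) if e["result"] == "fail")
prior_runs = [e["test_case"] for e in test_log[:first_failure]]
print(prior_runs)  # ['TC-01']
```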

IEEE 829 - Test Incident Report

This document is deliberately named an incident report, and not a fault report, because a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system. These include the expected results being wrong, the test being run wrongly, or an inconsistency in the requirements that allows more than one interpretation.

The report records all the details of the incident, such as the actual and expected results, when it failed, and any supporting evidence that will help in its resolution. It will also include, where possible, an assessment of the incident's impact upon testing.

The relationship between the Test Log and the Test Incident Report is not one to one. A failed test may raise more than one incident, and an incident may appear in more than one test failure. Taking the Billing project example, if both test cases failed completely, then three Test Incident Reports would be raised:

· The first would be for failure to produce a normal bill,

· The second would be for failure to produce a final bill,

· The third for failure to calculate the volume discount for both the normal and the final bill.

It is important to separate incidents by the features being tested, so as to get a good idea of the quality of the system and to allow progress in fixing faults to be checked.
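
A minimal sketch, with invented identifiers, of how the three incidents from the two failed Billing test cases might be recorded, making the many-to-many relationship explicit:

```python
# Illustrative sketch: three Test Incident Reports raised from two failed
# Billing test cases. TIR-03 appears in both test failures, and each test
# raised two incidents. All identifiers are invented for this example.

incidents = [
    {"id": "TIR-01", "feature": "normal bill produced", "raised_by": ["TC-01"]},
    {"id": "TIR-02", "feature": "final bill produced", "raised_by": ["TC-02"]},
    {"id": "TIR-03", "feature": "volume discount calculated",
     "raised_by": ["TC-01", "TC-02"]},  # one incident seen in two test failures
]

# Listing incidents per feature gives the "Test Incident Log" view described
# below: a per-feature picture of quality and of progress in fixing faults.
for tir in incidents:
    print(f"{tir['id']} - {tir['feature']} - seen in {len(tir['raised_by'])} test(s)")
```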

A useful derivative of the Test Incident Report is a Test Incident Log, which summarises the incidents and their status. This is not an IEEE 829 document, as all its values can be derived from the Test Incident Reports.

Documentation For Completion of Testing

Eventually testing will be completed according to the criteria specified in the Test Plan. This is when the success or failure of the system is decided based on the results. The Test Summary Report records this information.

IEEE 829 - Test Summary Report

The Test Summary Report brings together all pertinent information about the testing, including an assessment of how well the testing was done, the number of incidents raised and outstanding, and, crucially, an assessment of the quality of the system. Also recorded, for use in future project planning, are details of what was done and how long it took.

This document is important in deciding whether the quality of the system is good enough to allow it to proceed to another stage.

Use of the Standard

The standard is generic, so that it covers all types of testing, and it allows the documents to be tailored to each situation. This means using the basic structure as given: other documents can be added to it, sections can be added to each document, and further descriptions can be written. In addition, some content can be referenced in another document. Using the standard means that anybody joining a project will know which documents are being used, and for what purpose, allowing them to become productive faster.

Test Plan Overview

Creating test plans is no different from creating plans for any other part of a computing project. A plan enables you to decide in advance:

· How a project's objectives will be met,

· With the resources available,

· To the time scales required,

· To the quality desired,

· While controlling the risk.

A test plan is a specific version of a project plan with clauses that meet these requirements. The main characteristics of a project plan are described in the article Basics of Project Management. The international standard IEEE Std 829-1998 gives advice on the various types of test documentation required for testing including test plans, and details of these are in the IEEE 829 article. The test plan section of the standard defines 16 clauses. This article gives guidance on how the IEEE 829 standard maps against the requirements of a project test plan.

Project Plans and IEEE 829

The 16 clauses of the IEEE 829 test plan standard are:
1. Test plan identifier.
2. Introduction.
3. Test items.
4. Features to be tested.
5. Features not to be tested.
6. Approach.
7. Item pass/fail criteria.
8. Suspension criteria and resumption requirements.
9. Test deliverables.
10. Testing tasks.
11. Environmental needs.
12. Responsibilities.
13. Staffing and training needs.
14. Schedule.
15. Risks and contingencies.
16. Approvals.
These can be matched against the five characteristics of a basic plan, with a couple left over that form part of the plan document itself.

Scope

Scope clauses define what features will be tested. An aid to doing this is to prioritise them using a technique such as MoSCoW.
3. Test Items: The items of software, hardware, and combinations of these that will be tested.
4. Features to Be Tested: The parts of the software specification to be tested.
5. Features Not to Be Tested: The parts of the software specification to be excluded from testing.

Resource

Resource clauses give the overall view of the resources to deliver the tasks.
11. Environmental Needs: What is needed in the way of testing software, hardware, offices etc.
12. Responsibilities: Who has responsibility for delivering the various parts of the plan.
13. Staffing And Training Needs: The people and skills needed to deliver the plan.

Time

Time clauses specify what tasks are to be undertaken to meet the quality objectives, and when they will occur.
10. Testing Tasks: The tasks themselves, their dependencies, the elapsed time they will take, and the resource required.
14. Schedule: When the tasks will take place.
Often these two clauses refer to an appendix or another document that contains the detail.

Quality

Quality clauses define the standard required from the testing activities.
2. Introduction: A high level view of the testing standard required, including what type of testing it is.
6. Approach: The details of how the testing process will be followed.
7. Item Pass/Fail Criteria: Defines the pass and failure criteria for an item being tested.
9. Test Deliverables: Which test documents and other deliverables will be produced.
The associated article on test documentation gives details of the IEEE 829 documentation.

Risk

Risk clauses define in advance what could go wrong with a plan and the measures that will be taken to deal with these problems. An outline of risk management is in an associated article.
8. Suspension Criteria And Resumption Requirements: This is a particular risk clause to define under what circumstances testing would stop and restart.
15. Risks And Contingencies: This defines all other risk events, their likelihood and impact, and the countermeasures to overcome them.

Plan Clauses

These clauses are parts of the plan structure.
1. Test Plan Identifier: This is a unique name or code, including a version, by which the plan can be identified in the project's documentation.
16. Approvals: The signatures of the various stakeholders in the plan, to show they agree in advance with what it says.

Summary

The IEEE 829 standard for a test plan provides a good basic structure. It is not restrictive in that it can be adapted in the following ways:

· Descriptions of each clause can be tailored to an organisation's needs,

· More clauses can be added,

· More content can be added to any clause,

· Sub-sections can be defined in a clause,

· Other planning documents can be referred to.

If a properly balanced test plan is created then a project stands a chance of delivering a system that will meet the user's needs.

Website References

IEEE 829 can be ordered from the IEEE.

What is a Requirement?

A good set of requirements is needed for any project, especially a computer system project, to be successful. This is where many projects fail: they do not specify correctly what the system should do. In fact, many systems have just been given a deadline for delivery, a budget to spend, and a vague notion of what they should do.

The root of this problem is:

· Computer systems developers rarely understand how a business runs, and how it should run, as well as a business user does,

· Business users have little idea of what a computer system could achieve for them.

As a result paralysis sets in, and business management time is concentrated on meeting timescales and budgets rather than on what is going to be delivered.

Requirements Definition

The truth is that you do not need a great deal of technical knowledge to specify requirements; in fact it can be a big disadvantage. A requirement for a computer system specifies what you want or desire from a system. For a business in particular this is, "What you want or desire from a system, which you believe will deliver you a business advantage".

This advantage need not just be a reduction in costs; in fact, many systems justified on a reduction in operating costs fail to deliver, because low-skilled but relatively cheap staff have to be replaced by high-skilled, more expensive staff. The advantage can be a reduction in the time to process something, which will lead to a reduction in costs, or the ability to make better use of the unique knowledge base belonging to a business.

As you start to specify what you want or desire, you run up against the technical language of requirements. Fear not, it is quite straightforward:

· Functional requirements - are what you want a system to do.

· Non-functional requirements - are restrictions on the types of solutions that will meet the functional requirements.

· Design objectives - are the guides to use in selecting a solution.

Functional Requirements

These are the types of behaviour you want the system to perform. If you were buying vehicles for a business, your functional requirement might be:

· The vehicle should be able to take a load from a warehouse to a shop.

Similarly for a computer system you define what the system is to do. For example:

· The system should store all details of a customer's order.

The important point to note is that WHAT is wanted is specified, and not HOW it will be delivered.

Non-Functional Requirements

These often lead to much mystical mumbling, implying that only a high priest of the computing fraternity can understand them. They are, however, quite simple: they are the restrictions or constraints to be placed on the system and on how it is built. Their purpose is to restrict the number of solutions that will meet a set of requirements. Non-functional requirements can be split into two types: performance and development.

Performance Constraints

These constraints specify how the system should perform when it is delivered. The vehicle example, without any constraints, might result in solutions being offered ranging from a large truck to a sports car. To restrict the types of solution, you might include these performance constraints:

· It must take a load of at least one ton.

· The load area must be covered.

· The load area must have a height of at least 10 feet.

You may include more. Similarly for a computer system you might specify values for these generic types of performance constraints:

· The response time for information to appear to a user.

· The number of hours a system should be available.

· The number of records a system should be able to hold.

· The capacity for growth of the system.

· The length of time a record should be held for auditing purposes.

For the customer records example these might be:

· Information should be made available, and stored, within a maximum of 3 seconds.

· The system should be available from 9am to 5 pm Monday to Friday.

· The system should be able to hold 100,000 customer records initially.

· The system should be able to add 10,000 records a year for 10 years.

· A record should be fully available on the system for at least 7 years.

The important point with these is that they restrict the number of solution options that are offered to you by the developer.
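
As a small worked sketch, the capacity figures above imply the system must eventually hold 100,000 + (10 x 10,000) = 200,000 records. The measurable constraints can then be turned into a simple check; the "measured" values passed in below are invented:

```python
# Illustrative sketch: expressing the performance constraints above as
# checkable figures. The measured values are invented for this example.

MAX_RESPONSE_SECONDS = 3
INITIAL_RECORDS = 100_000
GROWTH_PER_YEAR = 10_000
GROWTH_YEARS = 10

# Capacity the system must eventually hold: 100,000 + 10 * 10,000 = 200,000.
REQUIRED_CAPACITY = INITIAL_RECORDS + GROWTH_PER_YEAR * GROWTH_YEARS
assert REQUIRED_CAPACITY == 200_000

def meets_constraints(response_seconds, designed_capacity):
    """True if a proposed solution satisfies the measurable constraints."""
    return (response_seconds <= MAX_RESPONSE_SECONDS
            and designed_capacity >= REQUIRED_CAPACITY)

print(meets_constraints(2.4, 250_000))  # True: within 3s, holds enough records
```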

Development Constraints

In addition to the performance constraints you may include some development constraints. These mainly fall in the field of project management, but are still a restriction on the types of solution that can be offered. There are three general types of development constraint:

· Time - When a system should be delivered is the obvious time constraint.

· Resource - How much money is available to develop the system is obvious, but a key resource would be the amount of time business staff could spend in briefing system development staff.

· Quality - Any standards to be used in developing the system, including project management, development methods, etc.

Design Objectives

Design objectives assist in selecting a solution from the number that are offered to you. Only you know what the most important feature of a new system is: whether it should be fast, have large storage, be easy to use, or whatever. Unfortunately you can't have everything you want; compromises have to be made.

Experiments with teams of developers in the 1970s showed that they will deliver a system according to what is defined as the top design objective. A number of teams were given an identical set of functional requirements, but each had a different top design objective: some had to make the system fast, some small (using only a small amount of computer storage), some easy to use, and so on. Each team delivered a system that met its top objective fully, and the other objectives to a lesser degree.

If you do not produce a set of design objectives in priority order, the developers will produce their own, and these might not be what you want. For the customer records example, the top design objective could be that it is easy for users to find customer information.

Bad Types Of Requirements

The above are all good types of requirement, and they allow a development team to provide you with a number of options from which you can select a suitable solution. However, many sets of requirements given to developers are polluted with design and implementation solutions. This means that the customer has told the developer how to conduct their business!

Examples of design solutions are:

· The system should run on our existing network of computers.

· The structure of a customer record must have separate fields for the first and last names of a customer.

Examples of implementation solutions are:

· The customer record system should run on a SQL database.

· The system should be built using the Java programming language.

Each of these says HOW the system should be built, not WHAT the design should deliver, and you may miss out on a better solution because you have made these design decisions.

There may be good reasons for some of these statements, but until you have seen a number of designs, you do not know if they are valid for you.

Prioritising Requirements

When a set of requirements has been produced, it is often large and complex. The realities of time scale and resource mean that it won't all be delivered, at least not the first time out. The customer should prioritise the requirements to specify what they most want, and what is merely nice to have. A method of doing this is the MoSCoW technique, sketched below. If the customer does not prioritise, then it will be done by the developers, who may select the parts that are easiest to produce, or that are technically challenging, without taking into account the needs of the organisation.
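
A minimal sketch of MoSCoW prioritisation applied to the customer records example; the Must/Should/Could labels assigned below are illustrative assumptions:

```python
# Illustrative sketch of MoSCoW prioritisation: each requirement is labelled
# Must, Should, Could, or Won't (have this time). The assignments are invented.

requirements = [
    ("Store all details of a customer's order", "Must"),
    ("Response time of 3 seconds or less", "Must"),
    ("Available 9am to 5pm, Monday to Friday", "Should"),
    ("Records held for at least 7 years", "Should"),
    ("Capacity to grow by 10,000 records a year", "Could"),
]

# Grouping by priority makes the delivery order reflect the business's needs,
# not whatever the developers find easiest or most interesting to build.
for label in ("Must", "Should", "Could", "Won't"):
    items = [text for text, priority in requirements if priority == label]
    if items:
        print(f"{label}: {items}")
```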

Conclusion

A good set of requirements consists of prioritised sets of:

· Functional requirements,

· Non-functional requirements, and

· Design objectives.

And does NOT have any design or implementation decisions.

Producing these will enable you to get systems that will deliver a business advantage.
