Monday, January 14, 2008

Software Test Execution

Table of Contents


1.0 General
    Module Objectives
    Module Structure
2.0 Software Test Execution
    Execute Tests and Record Results
    Report Test Results
    Testing Software Installation
    Acceptance Test
    Test Software Changes
    Testing in a Multiplatform Environment
    Testing Specialized Systems and Applications
        Testing Web-based Systems
        Testing Off-the-Shelf Software
        Testing Client/Server Systems
    Evaluate Test Effectiveness
    Building Test Documentation
3.0 Unit Summary
    Exercise

1.0 General
Test execution forms the core component of any testing project. It must be carried out with care, following the test strategy and test plans developed in the earlier phases. Test execution is the work of actually performing what has been planned.
1.1 Module Objectives
At the end of this session you should be able to:
Understand different approaches to test execution.
Perform test execution, and record and report test results.
Create test documentation.
1.2 Module Structure
S.No   Topic                                       Duration
1      Test execution and report test results      2
2      Approaches to execution                     6
       Total Duration                              8


2.0 Software Test Execution
2.1 Execute tests and Record Results

Concerns:
Three major concerns testers have on entering the test execution step:

1. Software not in a testable mode
2. Inadequate time/resources
3. Significant problems will not be uncovered during testing

Tasks:

The execution involves performing the following three tasks:
Build Test Data

Experience shows that it is uneconomical to test all conditions in an application system. Experience further shows that most testing exercises fewer than one-half of the computer instructions. Therefore, optimizing testing by selecting the most important test transactions is the key aspect of the test data tool.

Test File Design

To be effective, a test file should use transactions having a wide range of valid and invalid input data – valid data for testing normal processing operations and invalid data for testing programmed controls.

General types of conditions that should be tested are as follows.

Tests of normally occurring transactions
To test a computer system’s ability to accurately process valid data, a test file should include transactions that normally occur.

Tests using invalid data
Testing for the existence or effectiveness of programmed controls requires using invalid data.



Tests to violate established edit checks
From system documentation, the auditor should be able to determine what edit routines are included in the computer programs to be tested. He or she should then create test transactions to violate these edits to see whether they, in fact, exist.
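
To illustrate, the short Python sketch below builds a small mix of valid and invalid test transactions, including ones that deliberately violate edit checks. The field names and edit rules are illustrative assumptions, not taken from any particular application:

# Sketch: building valid and invalid test transactions (hypothetical edit rules).

def violates_edits(txn):
    """Return a list of edit checks the transaction violates."""
    problems = []
    if not txn["account"].isdigit() or len(txn["account"]) != 6:
        problems.append("account must be 6 digits")
    if txn["quantity"] <= 0 or txn["quantity"] > 99:
        problems.append("quantity must be between 1 and 99")
    if txn["type"] not in {"ORDER", "RETURN"}:
        problems.append("unknown transaction type")
    return problems

# Valid transactions exercise normal processing; invalid ones probe the edits.
test_transactions = [
    {"account": "123456", "quantity": 5,   "type": "ORDER"},    # normal case
    {"account": "12345A", "quantity": 5,   "type": "ORDER"},    # bad account
    {"account": "123456", "quantity": 100, "type": "ORDER"},    # over field limit
    {"account": "123456", "quantity": 1,   "type": "REFUND"},   # unknown type
]

for txn in test_transactions:
    print(txn, "->", violates_edits(txn) or "accepted")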

Entering Test Data
After the types of test transactions have been determined, the test data should be put into correct entry form. If the test team wishes to test controls over both input and computer processing, they should feed the data into the system on basic source documents for the organization to convert into machine-readable form.

Analyzing Processing Results
Before processing test data through the computer, the test team must predetermine the correct result for each test transaction for comparison with actual results.

Applying Test Files against Programs that Update Master Record
There are two basic approaches to test programs for updating master records. In one approach, copies of actual master records and/or simulated master records are used to set up a separate master file for the test. In the second approach, special audit records, kept in the organization’s current master file, are used.

Test File Process
The recommended nine-step process for the creation and use of test data is as follows:

 Identify test resources
 Identify test conditions
 Rank test conditions
 Select conditions for testing
 Determine correct results of processing
 Create test transactions
 Document test conditions
 Conduct test
 Verify and Correct


Volume Test Tool
Volume testing is a tool that supplements test data. The objective is to verify that the system can perform properly when internal program or system limitations have been exceeded. This may require that large volumes of transactions be entered during testing.

The types of internal limitations that can be evaluated with volume testing include:

 Internal accumulation of information, such as tables
 Number of line items in an event, such as the number of items that can be included within an order
 Size of accumulation fields
 Data-related limitations, such as leap year, decade change, switching calendar years, and so on
 Field size limitations, such as number of characters allocated for people’s names
 Number of accounting entities, such as number of business locations, state/country in which business is performed, and so on.

The concept of volume testing is as old as the processing of data in information services. What is necessary to make the concept work is a systematic method of identifying limitations. The recommended steps for determining program/system limitations follow.

 Identify input data used by the program
 Identify data created by the program
 Challenge each data element for potential limitations
 Document limitations
 Perform volume testing
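
As a minimal illustration, the Python sketch below challenges one assumed limitation (a documented maximum number of line items per order) by deliberately exceeding it and recording where behavior changes. The limit and the routine under test are stand-ins for the real program:

# Sketch: probing an internal limit by volume (illustrative limit and routine).

MAX_LINE_ITEMS = 50  # assumed documented limit

def add_line_item(order, item):
    if len(order) >= MAX_LINE_ITEMS:
        raise ValueError("order is full")
    order.append(item)

order = []
observed_limit = None
for n in range(1, MAX_LINE_ITEMS + 10):   # deliberately exceed the stated limit
    try:
        add_line_item(order, f"item-{n}")
    except ValueError:
        observed_limit = n - 1
        break

print("documented limit:", MAX_LINE_ITEMS, "observed limit:", observed_limit)
assert observed_limit == MAX_LINE_ITEMS, "limit handled differently than documented"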


Creating Test Scripts
The following five tasks are needed to develop, use, and maintain test scripts.

Determine Testing Levels
There are five levels of testing for scripts, as follows.

Unit Scripting. Develop a script to test a specific unit/module.
Pseudo concurrency scripting. Develop scripts to test when there are two or more users accessing the same file at the same time.
Integration scripting. Determine that various modules can be properly linked.
Regression scripting. Determine that the unchanged portions of systems remain unchanged when the system is changed. (Note: This is usually performed with the information captured on capture/playback software systems.)
Stress/performance scripting. Determine whether the system will perform correctly when it is stressed to its capacity; this validates the performance of the software under large transaction volumes.

The testers need to determine which of these five levels of scripting (possibly all) to include in the script.

Develop Script
This task is also normally done using the capture/playback tool. The script is a complete series of related terminal actions. The development of a script involves a number of considerations, as follows:
 Script components
 Terminal input
 Programs to be tested
 Files involved
 On-line operating environment
 Terminal output
 Manual entry of script transactions
 Date setup
 Secured initialization
 File restores
 Password entry
 Update
 Automated entry of script transactions
 Edits of transactions
 Navigation of transactions through the system
 Inquiry during processing
 External considerations
 Program libraries
 File states/contents
 Screen initialization
 Operating environment
 Security considerations
 Complete scripts
 Start and stop considerations
 Start: usually begins with a clear screen
 Start: begins with a transaction code
 Scripts: end with a clear screen
 Script contents
 Sign-on
 Setup
 Menu navigation
 Function
 Exit
 Sign-off
 Clear screen
 Security considerations
 Changing passwords
 User identification/security rules
 Reprompting
 Single-terminal user identifications
 Sources of scripting transactions
 Terminal entry of scripts
 Operations initialization of files
 Application program interface (API) communications
 Special considerations
 Single versus multiple terminals
 Date and time dependencies
 Timing dependencies
 Inquiry versus update
 Unit versus regression test
 Organization of scripts (recommended: organize by purpose)
 Unit test organization
o Single functions (transactions)
o Single terminal
o Separate inquiry from update
o Self-maintaining
 Pseudo concurrent test
o Single functions (transactions)
o Multiple terminals
o Three steps: setup (manual/script), test (script), and reset (manual/script)
 Integration test (string testing)
o Multiple functions (transactions)
o Single terminal
o Self-maintaining
 Regression test
o Multiple functions (transactions)
o Multiple terminals
o Three steps: setup (external), test (script), and reset (external)
 Stress/performance test
o Multiple functions (transactions)
o Multiple terminals (2 X rate)
o Iterative/vary arrival rate; three steps: setup (external), test (script), and collect performance data.

Execute Script
The script can be executed manually or by using the capture/playback tools.
Caution: Be reluctant to use scripting extensively unless a software tool drives the script. Some of the considerations to incorporate into script execution are:

 Environmental setup
 Program libraries
 File states/contents
 Date and time
 Security
 Multiple terminal arrival modes
 Serial (cross-terminal) dependencies
 Pseudo concurrent
 Processing options
 Stall detection
 Synchronization
 Rate
 Arrival rate
 Think time
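
The Python sketch below shows, in outline only, how a script can be driven as setup, test, and reset steps with a fixed think time between actions. The actions and timing values are illustrative; a real capture/playback tool would supply and replay the recorded terminal actions:

# Sketch: a minimal script driver with setup, test, and reset steps.

import time

script = {
    "setup": ["sign_on", "restore_files", "set_date"],
    "test":  ["enter_order", "inquire_order", "update_order"],
    "reset": ["delete_order", "sign_off", "clear_screen"],
}

THINK_TIME = 0.1      # assumed seconds between actions

def run(step_name, actions):
    print(f"-- {step_name} --")
    for action in actions:
        print("executing:", action)   # stand-in for sending a terminal action
        time.sleep(THINK_TIME)

for step in ("setup", "test", "reset"):
    run(step, script[step])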







Analyze Results
After executing the test script, the results must be analyzed. However, much of this should have been done during the execution of the script, using the operator instructions provided. Please note that if a capture/playback software tool is used, analysis will be more extensive after execution. The result analysis should include the following:

• System components
• Terminal outputs (screens)
• File content at conclusion of testing
• Environment activities, such as:
o Status of logs
o Performance data (stress results)
• On-screen outputs
• Order of outputs processing
• Compliance of screens to specifications
• Ability to process actions
• Ability to browse through data

Maintain Scripts
Once developed, scripts need to be maintained so that they can be used throughout development and maintenance. The areas to incorporate into the script maintenance procedure are:

• Programs
• Files
• Screens
o Insert (transactions)
o Delete
o Arrange
• Field
o Changed (length, content)
o New
o Moved
• Expand test cases


Several characteristics of scripting are different from batch test data development. These differences are:

 Data entry procedures required
 Use of software packages
 Sequencing of events
 Stop procedures



Execute Tests

There are many methods of testing an application system. The test team is concerned that all of these forms of testing occur so that the organization has the highest probability of success when installing a new application system.
The test team should address the following types of tests during the test phase.

 Manual, Regression, and Functional Testing (Reliability)
 Compliance Testing (Authorization)
 Functional Testing (File Integrity)
 Functional Testing (Audit Trail)
 Recovery Testing (Continuity of Processing)
 Stress Testing (Service Level)
 Compliance Testing (Security)
 Testing Complies with Methodology
 Functional Testing (Correctness)
 Manual Support Testing (Ease of use)
 Inspections (Maintainability)
 Disaster Testing (Portability)
 Functional and Regression Testing (Coupling)
 Compliance Testing (Performance)
 Operations Testing (Ease of Operations)


Record Test Results
A test problem is a condition that exists within the software system that needs to be addressed. Carefully and completely documenting a test problem is the first step in correcting the problem.

The following four attributes should be developed for all test problems:

1. Statement of condition. Tells what is.
2. Criteria. Tells what should be.
These two attributes are the basis for a finding. If a comparison between the two gives little or no practical consequence, no finding exists.

3. Effect. Tells why the difference between what is and what should be is significant.
4. Cause. Tells the reasons for the deviation. Identification of the cause is necessary as a basis for corrective action.

A well-developed problem statement will include each of these attributes. When one or more of these attributes is missing, questions almost always arise, such as:

Condition. What is the problem?
Criteria. Why is the current state inadequate?
Effect. How significant is it?
Cause. What could have caused the problem?
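
A simple way to enforce these four attributes is to record each problem in a structured form. The Python sketch below is one possible shape for such a record; the field contents are illustrative:

# Sketch: a test problem record holding the four attributes described above.

from dataclasses import dataclass

@dataclass
class TestProblem:
    condition: str   # what is
    criteria: str    # what should be
    effect: str      # why the deviation matters
    cause: str       # underlying reason, basis for corrective action

    def is_finding(self):
        # No finding exists if condition and criteria do not differ meaningfully.
        return self.condition != self.criteria

problem = TestProblem(
    condition="Order totals are rounded to whole dollars",
    criteria="Order totals must be accurate to the cent",
    effect="Invoices understate revenue by up to $0.99 per order",
    cause="Rounding applied before tax calculation",
)
print("finding exists:", problem.is_finding())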

Documenting a statement of a user problem involves three subtasks, which are explained in the following paragraphs.

Document Deviation
The documenting of deviation is describing the conditions, as they currently exist, and the criteria, which represents what the user desires. The actual deviation will be the difference or gap between “what is” and “what is desired”.

The statement of condition should document as many of the following attributes as appropriate for the problem.

Activities involved. The specific business or administrative activities that are being performed.
Procedures used to perform work. The specific step-by-step activities that are utilized in producing output from the identified activities.
Outputs/deliverables. The products that are produced from the activity.
Inputs. The triggers, events, or documents that cause this activity to be executed.
Users/customers served. The organization, individuals, or class of users/customers serviced by this activity.
Deficiencies noted. The status of the results of executing this activity and any appropriate interpretation of those facts.

Document Effect – Efficiency, economy, and effectiveness are useful measures of effect and frequently can be stated in quantitative terms such as dollars, time, units of production, number of procedures and processes, or transactions.

Document Cause – The cause is the underlying reason for the condition.
Most findings involve one or more of the following causes:

• Nonconformity with standards, procedures, or guidelines
• Nonconformity with published instructions, directives, policies, or procedures from a higher authority
• Nonconformity with business practices generally accepted as sound
• Employment of inefficient or uneconomical practices.

The determination of the cause of a condition usually requires the scientific approach, which encompasses the following steps:

• Define the problem (the condition that results in the finding).
• Identify the flow of work and/or information leading to the condition.
• Identify the procedures used in producing the condition.
• Identify the people involved.
• Recreate the circumstances to identify the cause of a condition.

2.2 Report Test Results

Concerns

The individuals responsible for assuring that software projects are accurate, complete, and meet users’ true needs have these concerns regarding the status of the project:

• Test results will not be available when needed.
• Test information is inadequate.
• Test status is not delivered to the right people.

Input:

There are three types of input needed to answer management’s questions about the status of the software system. They are as follows:

• Test Plan(s) and Project Plan(s): Testers need both test plan and the project plan, both of which should be viewed as contracts. The project plan is the project’s contract with management for work to be performed; and the test plan is a contract indicating what the testers will do to determine whether the software is complete and correct. It is against these two plans that testers will report status.
• Expected Processing Results: Testers report status of actual results against expected results. To make these reports, the testers need to know what results are expected. For software systems the expected results are the business results.
• Data Collected During Testing: Four categories of data will be collected during testing. These are as follows:
o Test Results Data: This data will include, but not be limited to:
 Test factors
 Business Objectives
 Interface Objectives
 Functions/sub functions
 Units
 Platform
o Test Transactions, Test Suites, and Test Events: These are the test products produced by the test team to perform testing. They include, but are not limited to:
 Test transactions/events
 Inspections
 Reviews
o Defects: This category includes a description of the individual defects uncovered during testing. This description includes, but is not limited to:
 Date the defect was uncovered
 Name of the defect
 Location of the defect
 Severity of the defect
 Type of defect
 How the defect was uncovered (i.e., test data/test script)
o Efficiency: Two types of efficiency can be evaluated during testing: software system and test.
o Storing Data Collected During Testing: It is recommended that a database be established in which to store the results collected during testing.
The most common test report is a simple spreadsheet, which indicates the project component for which status is requested, the test that will be performed to determine the status of that component, and the results of testing at any point in time.
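
The Python sketch below prints a report in that simple spreadsheet style, listing each project component, the test performed against it, and the current result. The components and results shown are illustrative:

# Sketch: the simple spreadsheet-style test status report described above.

rows = [
    ("Order entry", "Functional test", "Pass"),
    ("Order entry", "Stress test",     "Not run"),
    ("Invoicing",   "Functional test", "Fail"),
    ("Invoicing",   "Regression test", "Pass"),
]

print(f"{'Component':<15}{'Test':<20}{'Result':<10}")
for component, test, result in rows:
    print(f"{component:<15}{test:<20}{result:<10}")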

The process of reporting is divided into three tasks as follows.

Tasks

Task 1: Report Software Status
There are two levels of project status reports:

1. Summary Status report. Provides a general view of all project components.
2. Project Status report. Shows detailed information about a specific project component, allowing the reader to see up-to-date information about schedules, budgets, and project resources.
Both reports are designed to present information clearly and quickly.

Prior to effectively implementing a project reporting process, two inputs must be in place.

1. Measurement units: Information services must have established reliable units of measure that can be validated.
2. Process requirements: Process requirements for a project reporting system must include functional, quality, and constraint attributes.
The six subtasks for this task are described in the following subsections.

1. Establish a Measurement Team
2. Inventory Existing Project Measures
3. Develop a Consistent set of Project Metrics
4. Define Process Requirements
5. Develop and Implement the process
6. Monitor the Process





Summary status report:

The summary status report provides general information about all projects.
This is divided into four sections.
• Report date information
• Project Information
• Time line information
• Legend Information

Project status report:

The Project status report provides information related to a specific project component. This is divided into six sections.

• Vital Project Information
• General Project Information
• Project Activities Information
• Essential Elements Information
• Legend Information
• Project highlights information






Task 2: Report Interim Test Results

The test process should produce a continuous series of reports that describe the status of testing. The test reports are for use by the testers, the test manager, and the software development team.

Nine interim reports are proposed here. Testers can use all nine or select specific ones to meet individual test needs.

1. Function/Test Matrix
2. Functional Testing Status Report
3. Functions Working Time Line
4. Expected versus Actual Defects Uncovered Time Line
5. Defects Uncovered versus Corrected Gap Time Line
6. Average Age of Uncorrected Defects by Type
7. Defect Distribution Report
8. Relative Defect Distribution Report
9. Testing Action Report
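
As an illustration of the first report, the Python sketch below assembles a small Function/Test Matrix from recorded results. The function names, test identifiers, and outcomes are invented for the example:

# Sketch: a Function/Test Matrix (interim report 1) built from recorded results.

results = {
    ("Create order", "T1"): "P",
    ("Create order", "T2"): "F",
    ("Cancel order", "T1"): "P",
    ("Cancel order", "T3"): "P",
}

functions = sorted({f for f, _ in results})
tests = sorted({t for _, t in results})

print(f"{'Function':<15}" + "".join(f"{t:<5}" for t in tests))
for f in functions:
    row = "".join(f"{results.get((f, t), '-'):<5}" for t in tests)
    print(f"{f:<15}" + row)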


Task 3: Report Final Test Results

A final test report should be prepared at the conclusion of each test activity. This might include:

• Individual Project test report (e.g., a single software system)
• Integration Test report
• System Test report
• Acceptance Test report











2.3 Testing Software Installation

The process of installation testing is attempting to validate that:

• Proper programs are placed into production status.
• Needed data is properly prepared and available.
• Operating and user instructions are prepared and used.

Input: The installation phase is the process of getting a new system operational. The process may involve any or all of the following areas:

• Changing old data to a new format.
• Creating new data
• Installing new and/or changed programs
• Updating computer instructions
• Installing new user instructions.

Much of the test process will be evaluating and working with installation phase deliverables. The more common deliverables produced during the installation phase include:

• Installation plan
• Installation flowchart
• Installation program listings and documentation (assuming special installation programs are required).
• Test results from testing special installation programs
• Documents requesting movement of programs into the production library and removal of current programs from that library.
• New operator instructions
• New user instructions and procedures
• Results of installation process


The process of installation is divided into tasks.

Task 1a: Test Installation of New Software

The following are the concerns of installation testing.

1. Accuracy and completeness of installation verified (reliability).
2. Data changes during installation prohibited (authorization)
3. Integrity of production files verified.
4. Installation audit trail recorded.
5. Integrity of previous system assured (continuity of processing).
6. Fail-safe installation plan implemented (service level).
7. Access controlled during installation (security).
8. Installation complies with methodology.
9. Proper programs and data placed into production.
10. Usability instructions disseminated.
11. Documentation complete (maintainability).
12. Documentation complete (portability).
13. Interface coordinated (coupling).
14. Integration performance monitored.
15. Operating procedures implemented.



Task 1b: Test Changed Version (of Software)

The specific objectives of installing the change are as follows:

• Put changed application systems into production.
• Assess the efficiency of changes.
• Monitor the correctness of the change.
• Keep systems library up to date.

Most common concerns during the installation of the change include the following:

• Will the change be installed on time?
• Is backup data compatible with the changed system?
• Are recovery procedures compatible with the changed system?
• Is the source/object library cluttered with obsolete program versions?
• Will errors in the change be detected?
• Will errors in the change be corrected?

Testing the installation of changes is divided into three tasks.

• Test the restart/recovery plan
• Verify the Correct change has been entered into production
• Verify unneeded versions have been deleted






Task 2: Monitor production

The following groups may monitor the output of a new program version:

• Application system control group.

• User personnel.
• Software maintenance personnel.
• Computer operations personnel.

Regardless of who monitors the output, the software maintenance analyst and user personnel should provide clues about what to look for. User and software maintenance personnel must attempt to identify the specific areas where they believe problems might occur.

The types of clues that could be provided to monitoring personnel include:

• Transactions to investigate.
• Customers.
• Reports.
• Tape files.
• Performance.


Task 3: Document problems

Individuals detecting problems when they monitor changes in application systems should formally document them. The formal documentation process can be made even more effective if the forms are controlled through a numbering sequence.
The individual monitoring the process should be asked both to document the problem and to assess the risk associated with that problem.









2.4 Acceptance Test

1. Define the Acceptance Criteria
In preparation for developing the acceptance criteria, the user should:

• Acquire full knowledge of the application for which the system is intended.
• Become fully acquainted with the application as it is currently implemented by the user’s organization.
• Understand the risks and benefits of the development methodology that is to be used in correcting the software system.
• Fully understand the consequences of adding new functions to enhance the system.

Acceptance requirements that a system must meet can be divided into these four categories:

• Functionality requirements, which relate to the business rules that the system must execute.
• Performance requirements, which relate to operational requirements such as time or resource constraints.
• Interface quality requirements, which relate to a connection to another component of processing (e.g., human/machine, machine/module).
• Overall software quality requirements are those that specify limits for factors or attributes such as reliability, testability, correctness, and usability.


2. Develop an Acceptance Plan

The first step to achieve software acceptance is the simultaneous development of a software acceptance plan, general project plans, and software requirements to ensure that user needs are represented correctly and completely. This simultaneous development will provide an overview of the acceptance activities, to ensure that resources for them are included in the project plans.
After the initial software acceptance plan has been prepared, reviewed, and approved, the acceptance manager is responsible for implementing the plan and for assuring that the plan’s objectives are met.

3. Execute the Acceptance Plan (Conduct Acceptance Tests and Reviews)

The objective of this step is to determine whether the acceptance criteria have been met in a delivered product. This can be accomplished through reviews, which involve looking at interim products and partially developed deliverables at various points throughout the developmental process.

a. Developing Test Cases (Use Cases) Based on How Software Will Be Used

Incomplete, incorrect, and missing test cases can cause incomplete and erroneous test results. Flawed test results cause rework at a minimum and, at worst, a flawed system. There is a need to ensure that all required test cases are identified so that all system functionality requirements are tested.

A use case is a description of how a user (or another system) uses the system being designed to perform a given task. A system is described by the sum of its use cases. Each instance or scenario of a use case will correspond to one test case.


The following are the subtasks followed.
i. Build System Boundary Diagram
ii. Define Use Cases
iii. Develop Test Cases
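
Because each scenario of a use case corresponds to one test case, the derivation can be shown mechanically. The Python sketch below, using an invented use case, turns each scenario into a test case with an input and an expected result:

# Sketch: deriving test cases from use-case scenarios, one per scenario.

use_case = {
    "name": "Withdraw cash",
    "scenarios": [
        {"input": {"balance": 500, "amount": 100}, "expected": "dispense 100"},
        {"input": {"balance": 50,  "amount": 100}, "expected": "reject: insufficient funds"},
        {"input": {"balance": 500, "amount": -10}, "expected": "reject: invalid amount"},
    ],
}

test_cases = [
    {
        "id": f"{use_case['name']}-{i + 1}",
        "input": scenario["input"],
        "expected_result": scenario["expected"],
    }
    for i, scenario in enumerate(use_case["scenarios"])
]

for tc in test_cases:
    print(tc)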

4. Reach an Acceptance Decision
Typical acceptance decisions include

1. Required changes are accepted before progressing to the next activity.
2. Some changes must be made and accepted before further development of that section of the product; other changes may be made and accepted at the next major review.
3. Progress may continue and changes may be accepted at the next review.
4. No changes are required and progress may continue.






2.5 Test Software Changes

Information Technology management should be concerned about the implementation of the testing and training objectives.

The following five tasks should be performed to effectively test a changed version of software.

Task 1: Develop/Update the Test Plan

The test plan for software maintenance is a shorter, more directed version of a test plan used for a new application system. While new application testing can take many weeks or months, software maintenance testing often must be done within a single day or a few hours. Because of time constraints, many of the steps that might be performed individually in a new system are combined or condensed into a short time span. This increases the need for planning so that all aspects of the test can be executed within the allotted time.

The elements to be tested (types of testing) are:
• Changed transactions
• Changed programs
• Operating procedures
• Control group procedures
• User procedures
• Intersystem connections
• Job control language
• Interface to systems software
• Execution of interface to software systems
• Security
• Backup/recovery procedures

Task 2: Develop/Update the Test Data

Data must be prepared for testing all the areas changed during a software maintenance process. For many applications, the existing test data will be sufficient to test the new change. However, in many situations new test data will need to be prepared.

It is important to test both what should be accomplished, as well as what can go wrong. Most tests do a good job of verifying that the specifications have been implemented properly. Where testing frequently is inadequate is in verifying the unanticipated conditions. Included in this category are:

• Transactions with erroneous data
• Unauthorized transactions
• Transactions entered too early
• Transactions entered too late
• Transactions that do not correspond with master data contained in the application
• Grossly erroneous transactions, such as transactions that do not belong to the application being tested
• Transactions with larger values in the fields than anticipated
There are three methods that can be used to develop/update test data as follows:

Method 1: Update existing test data

If test files have been created for a previous version they can be used for testing a change. However the test data will need to be updated to reflect the changes to the software.

Method 2: Create new test data

The creation of new test data for maintenance follows the same methods as creating test data for a new software system.

Method 3: Use production data for testing

Tests are performed using some or all of the production data for test purposes (date modified, of course), particularly when there are no function changes. Using production data for test purposes may result in the following impediments to effective testing:

• Missing test transactions
• Multiple tests of the same transaction
• Unknown test results
• Lack of ownership

Production Data Definition: The following categories of production data can be used in testing:

• Transaction files
• Business master files
• Master files of business data
• Error files
• Operations, communications, database, or accounting logs
• Manual logs

This production data can be used for test purposes. In some instances, it yields test transactions (e.g., a transaction file); in other cases, it provides information about performance results (e.g., an SMF log or job accounting log). To use production data as test data, testers first must determine the type of production data to use (e.g., a business transaction file). Then they can perform one or more of the following five steps to convert that production file to a test file.

• Select the First Batch of Records
• Protect Production Files from Modification
• Select a Random Sample of Transactions
• Browse through the Production File
• Do Parallel Testing
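
The Python sketch below illustrates two of these steps in miniature: selecting a random sample of production transactions and shifting their dates into the test period. The file layout, sample size, and date shift are assumptions:

# Sketch: turning production transactions into test data by sampling and
# shifting dates (illustrative layout and values).

import random
from datetime import date, timedelta

production_file = [
    {"id": 1, "amount": 120.00, "date": date(2007, 11, 3)},
    {"id": 2, "amount": 75.50,  "date": date(2007, 11, 4)},
    {"id": 3, "amount": 19.99,  "date": date(2007, 11, 5)},
    {"id": 4, "amount": 640.00, "date": date(2007, 11, 6)},
]

SAMPLE_SIZE = 2
DATE_SHIFT = timedelta(days=90)   # assumed shift into the test period

sample = random.sample(production_file, SAMPLE_SIZE)
test_file = [dict(txn, date=txn["date"] + DATE_SHIFT) for txn in sample]

for txn in test_file:
    print(txn)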


Task 3: Test the control change process

The following three subtasks are commonly used to control and record changes. If the staff performing the corrections do not have such a process, the testers can give them these subtasks and then request the work papers when complete. Testers should verify completeness using these three subtasks as a guide.

1. Identify and Control Change
An important aspect of changing a system is identifying which parts of the system will be impacted by that change. The impact may be in any part of the application system, both manual and computerized, as well as in the supporting software system. Regardless of whether impacted areas will require changes, at a minimum there should be an investigation into the extent of the impact.

2. Document Change Needed on Each Data Element
Whereas changes in processing normally impact only a single program or a small number of interrelated programs, changes to data may impact many applications. Thus, changes that impact data may have a more significant effect on the organization than those that impact processing.

3. Document Changes Needed in Each Program
The implementation of most changes will require some programming alterations. Even a change of data attributes will often necessitate program changes. Some of these will be minor in nature, while others may be extremely difficult and time-consuming to implement.

Task 4: Conduct Testing

Software change testing is normally conducted by both the user and software maintenance test team. The testing is designed to provide the user assurance that the change has been properly implemented.
An effective method for conducting software maintenance testing is to prepare a checklist providing both the administrative and technical data needed to conduct the test. This ensures that everything is ready at the time the test is to be conducted.

Task 5: Develop/Update the Training Material

Updating training material for users, and training users, is not an integral part of many software change processes. Therefore, this task description describes a process for updating training material and performing that training.
The training requirements are incorporated into existing training material. Therefore, it behooves the application project personnel to maintain an inventory of training material.

• Training Plan Work Paper
• Training Material Inventory Form
• Prepare Training Material
• Conduct Training
2.6 Testing in a Multiplatform Environment

Overview:

Each platform on which the software is designed to execute operationally may have slightly different characteristics. These distinct characteristics include various operating systems, hardware configurations, operating instructions, and supporting software, such as database management systems. The objective of testing is to determine whether the software will produce the correct results on various platforms.

Objective:

The objective of this process is to validate that a single software package executed on different platforms will produce the same results. The test process is basically the same as that used in parallel testing.

Concerns:

There are three major concerns in multiplatform testing:

• The platforms in the test lab will not be representative of the platforms in the real world.
• The software will be expected to work on platforms not included in the test labs.
• The supporting software on various platforms is not comprehensive.

Workbench:

Most tasks assume that the platforms will be identified in detail, and that the software to run on the different platforms has been previously validated as being correct.

Input:

The two inputs needed for testing in a multiplatform environment are as follows:

• List of platforms on which software must execute.
• Software to be tested

Do Procedures:

The following tasks should be performed to validate that software performs consistently in a multiplatform environment.

Task 1: Define Platform Configuration Concerns
The first task is to develop a list of potential concerns about that environment. The testing that follows will then determine the validity of those concerns. The recommended process for identifying concerns is error guessing.
Error guessing requires two prerequisites.

1. The error-guessing group understands how the platform works.
2. The error-guessing group knows how the software functions.

Task 2: List Needed Platform Configurations
The test team must identify the platforms that must be tested. Ideally, this list of platforms, with detailed descriptions of each, would be input to the test process. The needed platforms are either those that will be advertised as acceptable for using the software, or the platforms within an organization on which the software will be executed. Testers must then determine whether those platforms are available for testing. If the exact platform is not available, the testers need to determine whether an existing platform is an acceptable substitute.

Task 3: Assess Test Room Configurations
The testers need to determine whether the platforms available in the test room are acceptable for testing. This involves two steps:
1. Document the platform to be used for testing, if any is available, on the work paper.
2. Make a determination as to whether the available platform is acceptable for testing.

Task 4: List Structural Components Affected by the Platform(s)
Structural testing deals with the architecture of the system. Architecture describes how the system is put together. It is used in the same context that an architect designs a building. Some of the architectural problems that could affect computer processing include:

 Internal limits on number of events that can occur in a transaction
 Maximum size of fields
 Disk storage limitations
 Performance limitations

Task 5: List Interface – Platform Effects
Systems tend to fail at interface points, an interface being when control is passed from one processing component to another as, for example, when data is retrieved from a database, output reports are printed or transmitted, or a person interrupts processing to make a correction.

This is a two-part task. Part one is to identify the interfaces within the software systems. These interfaces should be readily identifiable in the user manual for the software. The second part is to determine whether those interfaces could be impacted by the specific platform on which the software executes.
At the conclusion of this task the tests that will be needed to validate multiplatform operations will have been determined.

Task 6: Execute the Tests
The platform test should be executed in the same manner as other tests are executed. The only difference may be that the same test would be performed on multiple platforms to determine that consistent processing occurs.
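
The Python sketch below illustrates the idea of this task: the same test is (notionally) executed on each platform and the results are compared against a baseline platform. The platform names and the simulated results are placeholders for real test executions:

# Sketch: comparing results of the same test across platforms.

platforms = ["Windows", "Linux", "Mac OS"]

def run_test_on(platform):
    # Stand-in for executing the identical test case on the given platform.
    return {"total": 1154.20, "line_items": 12}

results = {p: run_test_on(p) for p in platforms}
baseline = results[platforms[0]]

for platform, result in results.items():
    status = "consistent" if result == baseline else "INCONSISTENT"
    print(f"{platform:<10} {status}: {result}")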

Check Procedures:
Prior to completing multiplatform testing a determination should be made that testing was performed correctly.

Output:
The output from this test process is a report indicating:

• Structural components that work or don’t work by platform
• Interfaces that work or don’t work by platform
• Multiplatform operational concerns that have been eliminated or substantiated
• Platforms on which the software should operate, but that have not been tested.

Guidelines:
Multiplatform testing is a costly, time-consuming, and extensive component of testing. The resources expended on multiplatform testing can be significantly reduced if that testing focuses on predefined multiplatform concerns. Identified structural components that might be impacted by multiple platforms should comprise most of the testing. This will focus the testing on what should be the major risks faced in operating a single software package on many different platforms.



2.7 Testing Specialized Systems and Applications

2.7.1 Testing Web-based Systems

Overview:

The client workstations are networked to a web server, either through a remote dial-in connection or through a network such as a local area network (LAN) or wide area network (WAN). As the web server receives and processes requests from the client workstation, requests may be sent to the application server to perform actions such as data queries, electronic-commerce transactions, and so forth.

Objective:

The objective is to assess the adequacy of the web components of software applications. Web-based testing generally only needs to be done once for any applications using the web.

Concerns:

The concerns that the tester should have when conducting web-based testing are as follows:

• Browser compatibility
• Functional correctness
• Integration
• Usability
• Security
• Performance
• Verification of code
An additional concern is that web terminology may not be understood by the testers. The following are common web terms:


• Browser
• Hyper Text Markup Language (HTML)
• Platform
• Java
• Web server
• Application server
• Back end
• Firewall
• Uniform Resource Locator (URL)
• Electronic commerce (e-Commerce)
• Component
• Common Gateway Interface (CGI)
• Bandwidth
• Secure Socket Layer (SSL)
• File Transfer Protocol (FTP)

Input:

The input to this test process is the description of web-based technology used in the systems being tested.
The following list shows how web-based systems differ from other technologies.

• Uncontrolled user interfaces (Browsers)
• Complex Distributed systems
• Security issues
• Multiple layers in architecture
• New terminology and skill sets
• Object-oriented
• Nonstandardized

Do Procedures:

Testing of a web-based system involves performing the following four tasks.

Task 1: Select Web-Based Risks to Include in the Test Plan

Risks are important to understand because they reveal what to test. Each risk points to an entire area of potential tests. In addition, the degree of testing should be based on risk. The risks are briefly listed below followed by a more detailed description of the concerns associated with each risk.

• Security
• Performance
• Correctness
• Compatibility (configuration)
• Reliability
• Data Integrity
• Usability
• Recoverability

Key areas of concern: Security Risk

The following are the security risks that need to be addressed in an Internet application test plan.

• External intrusion
• Protection of secured transactions
• Viruses
• Access control
• Authorization levels

Key areas of Concern: Performance

System performance can make or break an Internet application. There are several types of performance testing that can be done to validate the performance levels of an application.
Typically, the most common kind of performance testing for Internet applications is load testing. Load testing seeks to determine how the application performs under expected and greater-than-expected levels of activity. Application load can be assessed in a variety of ways:

• Concurrency
• Stress
• Throughput
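
A very small load-test sketch in Python is shown below; it issues a fixed number of simulated requests from a pool of concurrent users and reports throughput. The user count, request count, and the stand-in request function are assumptions; a real test would drive the application itself:

# Sketch: a minimal load test measuring throughput under concurrency.

import time
from concurrent.futures import ThreadPoolExecutor

CONCURRENT_USERS = 10
REQUESTS_PER_USER = 20

def issue_request(_):
    time.sleep(0.01)          # stand-in for a round trip to the web server
    return True

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    responses = list(pool.map(issue_request, range(CONCURRENT_USERS * REQUESTS_PER_USER)))
elapsed = time.time() - start

print(f"{len(responses)} requests in {elapsed:.2f}s "
      f"-> {len(responses) / elapsed:.1f} requests/second")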

Key areas of Concern: Correctness

One of the most important areas of concern is that the application functions correctly. This can include not only the functionality of buttons and “behind the scenes” instructions, but also calculations and navigation of the application.

Key areas of Concern: Compatibility

Compatibility is the ability of the application to perform correctly in a variety of expected environments. Two of the major variables that affect web-based applications are the operating systems and browsers.

Common operating systems include
 DOS/Windows
 Mac OS
 UNIX
 VMS
 Sun and SGI (Silicon Graphics Inc.)
 Linux

Popular browsers include

 Microsoft Internet Explorer
 Netscape Communicator
 Mosaic

There are many other lesser-known browsers. You can find information on all different types of browsers at www.browserwatch.com.

Browser Configuration:

Each browser has configuration options that affect how it displays information.
Some of the main things to consider from a hardware compatibility standpoint are:

• Monitors, video cards, and video RAM
• Audio, video, and multimedia support
• Memory (RAM) and hard drive space
• Bandwidth access

Browser differences can make a web application appear differently to different people. The differences may appear in any of the following areas.

• Print handling
• Reload
• Navigation
• Graphics filters
• Caching
• Dynamic page generation
• File downloads
• E-mail functions

Key areas of Concern: Reliability

Because of the continuous uptime requirements for most Internet applications, reliability is a key concern. However, reliability involves more than system availability; it can also be expressed in terms of the reliability of the information obtained from the application:

• Consistently correct results
• Server and system availability

Key areas of Concern: Data Integrity

Not only must the data be validated when it is entered into the web application, but it must also be safeguarded to ensure the data stays correct.

Ensuring only correct data is accepted: This can be achieved by validating the data at the page level when it is entered by a user.

Ensuring data stays in a correct state: This can be achieved by procedures to back up data and ensure that controlled methods are used to update data.

Key areas of Concern: Usability

If users or customers find an Internet application hard to use, they will likely go to a competitor’s site. Usability can be validated and usually involves the following:

• Ensuring the application is easy to use and understand
• Ensuring that users know how to interpret and use the information delivered from the application
• Ensuring that navigation is clear and correct










Key areas of Concern: Recoverability

Internet applications are more prone to outages than systems that are more centralized or located on reliable, controlled networks. Remote accessibility makes the following recoverability concerns important:

• Lost connections
o Timeouts
o Dropped Lines
• Client system crashes
• Server system crashes or other application problems





Task 2: Select Web-Based Tests

Select the type of test based on the requirement and necessity from among the following.

• Unit or Component
• Integration
• System
• User Acceptance (Business Process Validation)
• Performance
• Load/Stress
• Regression
• Usability
• Compatibility

Task 3: Select Web-Based Test Tools

Effective web-based testing necessitates the use of web-based tools.

HTML Test Tools: Although many web development packages include an HTML checker, there are ways to perform a verification of HTML if you do not have such a feature. An example of a standalone tool is Doctor HTML by Imagiware (http://drhtml.imagiware.com/).

Site Validation: Site validation tools check your web applications to identify inconsistencies and errors such as:

• Moved pages
• Orphaned pages
• Broken links

An example of a site validation tool is SQA Site Check by Rational Software.
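
The Python sketch below shows the kind of broken-link check such tools automate, using only the standard library. The starting URL is illustrative, and the check should only be run against a site you control:

# Sketch: a minimal broken-link check of the kind a site validation tool automates.

import re
import urllib.request
from urllib.parse import urljoin

START_URL = "http://example.com/"   # assumed starting page

with urllib.request.urlopen(START_URL) as response:
    page = response.read().decode("utf-8", errors="ignore")

links = [urljoin(START_URL, href) for href in re.findall(r'href="([^"]+)"', page)]

for link in links:
    try:
        status = urllib.request.urlopen(link).status
    except Exception as error:
        status = f"broken ({error})"
    print(link, "->", status)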

Java Test Tools: Java test tools are specifically designed for testing Java applications. Examples include:

• NuMega TrueTime Java Edition
• Sun Test Suite by Sun Microsystems
• Silk Test by Segue Software
• SilkScope by Segue Software
• Silk Spec by Segue Software

Load / Stress Testing Tools: Load / Stress tools evaluate web-based systems when subjected to large volumes of data or transactions. Examples of tools that can simulate numerous virtual users and vary transaction rates include:

• Astra SiteTest by Mercury Interactive
• SilkPerformer by Segue Software

Test Case Generators: Test case generators create transactions for use in testing. This type of tool can tell you what to test, as well as create test cases that can be used in other test tools. An example of a test case generator is Astra QuickTest by Mercury Interactive.
This tool captures business processes into a visual map to generate data-driven tests automatically. Test scripts can be imported into Mercury’s LoadRunner and managed with TestDirector.









2.7.2 Testing Off-the-Shelf Software

Overview:

Off-the-shelf software must be made to look attractive if it is to be sold. Thus, the developer of off-the-shelf software (OTSS) will emphasize the benefits of the software. Unfortunately, there is often a difference between what the user believes the software can accomplish and what it actually does accomplish.

Objective:
The objective of this off-the-shelf testing process is to provide the highest possible assurance of correct processing with a minimal effort. However, the process should be used for noncritical off-the-shelf software.

Concerns:
The user of off-the-shelf software should be concerned with these areas.
• Task/items missing.
• Software fails to perform.
• Extra features.
• Does not meet business needs.
• Does not meet operational needs.
• Does not meet people needs.


Input:
There are two inputs to this step. The first input is the manuals that accompany the OTSS. These normally include installation and operation manuals. The manuals describe what the software is designed to accomplish, and how to perform the tasks necessary to accomplish the software functions. The second input is the OTSS itself.
Do Procedures:
The execution of this process involves four tasks plus the check procedures. The process assumes that the individual(s) performing the test has knowledge of how the software will be used in the organization. If the tester does not know how it will be used, an additional step is required for the tester to identify the functionality that will be needed by the users of the software. The four tasks are described as follows:

Task 1: Test Business Fit
The objective of this task is to determine whether the software meets your needs. The task involves carefully defining your business needs and then verifying whether the software in question will accomplish them.
The first step of this task is defining business functions in a manner that can be used to evaluate software capabilities. The second step of this task is matching software capabilities against business needs.
Step 1: Completeness of needs Specification
This test determines whether you have adequately defined your needs. Your needs should be defined in terms of the following two categories of outputs:

1. Output products/reports. Output products/reports are specific documents that you want produced by the computer system.
2. Management Decision Information. This category tries to define the information needed for decision-making purposes.


Testing the Completeness of Needs:
The first test to be performed for computer software is a test of completeness of needs. This has proved to be one of the major causes of problems in OTSS.

The objective of this first test is to help you determine how completely your needs are defined. The test is based on criteria learned by large corporations.

Step 2: Critical Success Factor Test:
This test tells whether the software package will be successful in meeting your business needs.
Critical Success Factors (CSF) are those criteria or factors that must be present in the acquired software for it to be successful.

Some of the common CSFs for OTSS you may want to use are:

• Ease of use
• Expandability
• Maintainability
• Cost-effectiveness
• Transferability
• Reliability
• Security

Task 2: Test Operational Fit

The objective of this task is to determine whether the software will work in your business. Within your business there are several constraints that must be satisfied before you acquire the software, including:

• Computer hardware constraints
• Data preparation constraints
• Data entry constraints

Step 1: Compatibility with your hardware, operating system, and other software packages

This is not a complex test. It involves a simple matching between your processing capabilities and limitations, and what the vendor of the software says is necessary to run the software package. The most difficult part of this evaluation is ensuring that the multiple software packages can properly interface.

Hardware compatibility. List the following characteristics for your computer hardware.
• Hardware vendor
• Amount of main storage
• Disk storage unit identifier
• Disk storage unit capacity
• Type of printer
• Number of print columns
• Type of terminal
• Maximum terminal display size
• Keyboard restrictions

Operating systems compatibility. For the operating system used by your computer hardware, list:
1. Name of operating system
2. Version of operating system in use


Program compatibility. List all of the programs with which you expect or would like this specific application to interact.

Data compatibility. In many cases, program compatibility will answer the questions on data compatibility. However if you created special files you may need descriptions of the individual data elements and files.

Step 2: Integrating the software into Your business system Work flow

Each computer system makes certain assumptions. Unfortunately, these assumptions are rarely stated in the vendor literature.
The danger is that you may be required to do some manual processing functions that you may not want to do in order to utilize the software.

The objective of this test is to determine whether you can plug the OTSS into your existing manual system without disrupting your entire operation. Remember that:


• Your manual system is based on a certain set of assumptions.
• Your manual system uses existing forms, existing data, and existing procedures.
• The computer system is based on a set of assumptions.
• The computer system uses a predetermined set of forms and procedures.
• Your current manual system and the new computer system may be incompatible.
• If they are incompatible, the computer system is not going to change – you will have to.
• You may not want to change – then what?


Performing the Data Flow Diagram Test

The data flow diagram is really more than a test. At the same time that it tests whether you can integrate the computer system into your business system, it shows you how to do it. It is both a system test and a system design methodology incorporated into a single process. So, to prepare the document flow narrative or document flow description, these three tasks must be performed:

• Prepare a document flow of your existing system.
• Add the computer responsibility to the data flow diagram.
• Modify the manual tasks as necessary.

The objective of this process is to illustrate the type and frequency of work flow changes that will be occurring. At the end of this test, you will need to decide whether you are pleased with the revised work flow. If you feel the changes can be effectively integrated into your work flow, the potential computer system has passed the test. If you feel the changes in the work flow will be disruptive, you may want to fail the software in this test and either look for other software or continue manual processing.

Step 3: Demonstrating the Software in Operation

This test analyzes the many facets of software. Software developers are always excited when their program goes to what they call “end of job”. This means that it executes and concludes without abnormally terminating.
Demonstration can be performed in either of the following ways:

• Computer store – controlled demonstration
• Computer site demonstration
These aspects of computer software should be observed during the demonstration:

• Understandability
• Clarity of communication
• Ease of use of instruction manual
• Functionality of the software
• Knowledge to execute
• Effectiveness of help routines
• Evaluate program compatibility
• Data compatibility
• Smell test


Task 3: Test People Fit

The objective of this task is to determine whether your employees can use the software. This testing consists of ensuring that your employees have or can be taught the necessary skills.

This test evaluates whether people possess the skills necessary to effectively use computers in their day-to-day work. The evaluation can be of current skills, or of the program that will be put into place to teach individuals the necessary skills. Note that this includes the owner-president of the organization as well as the lowest-level employee in the organization.
The results of this test will show:

• The software can be used as is.
• Additional training/support is necessary.
• The software is not usable with the skill sets of the proposed users.


Task 4: Validate Acceptance Test Software Process

The objective of this task is to validate that the off-the-shelf software will in fact meet the structural and functional needs of the user of the software.

Step 1: Create Functional Test Conditions

It is important to understand the difference between correctness and reliability because it impacts both testing and operation. The types of test conditions that are needed to verify the functional accuracy and completeness of computer processing include:

• All transaction types to ensure they are properly processed
• Verification of all totals
• Assurance that all outputs are produced
• Assurance that all processing is complete
• Assurance that controls work
• Reports that are printed on the proper paper, and in the proper number of copies
• Correct field editing
• Logic paths in the system that direct the inputs to the appropriate processing routines
• Employees that can input properly
• Employees that understand the meaning and makeup of the computer outputs they generate

Step 2: Create Structural Test Conditions

Structural, or reliability, test conditions are challenging to create and execute. Novices to the computer field should not expect to do extensive structural testing. They should limit their structural testing to conditions closely related to functional testing. However, structural testing becomes easier to perform as computer proficiency increases, and this type of testing is quite valuable.
Some of the architectural problems that could affect computer processing include:

• Internal limits on number of events that can occur in a transaction (e.g., number of products that can be included on an invoice)
• Maximum size of fields (e.g., quantity is only two positions in length, making it impossible to enter an order for over 99 items)
• Disk storage limitations (e.g., you are only permitted to have X customers)
• Performance limitations (e.g., the time to process transactions jumps significantly when you enter over X transactions).
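
Below is a small boundary-test sketch based on the limits in the examples above; the two-digit quantity field and the per-invoice line limit are assumed values, and the validation functions stand in for the package under test.

# Hedged sketch: exercise structural limits at, below, and above each boundary.
MAX_QUANTITY = 99          # two-position quantity field from the example above
MAX_INVOICE_LINES = 50     # assumed internal limit on products per invoice

def accept_quantity(qty: int) -> bool:
    return 1 <= qty <= MAX_QUANTITY

def accept_invoice(line_count: int) -> bool:
    return 1 <= line_count <= MAX_INVOICE_LINES

assert accept_quantity(99) and not accept_quantity(100)
assert accept_invoice(50) and not accept_invoice(51)
print("Boundary conditions behave as documented.")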

Check Procedures

At the conclusion of this testing process, the tester should verify that the OTSS test process has been conducted effectively.

Output

There are three potential outputs as a result of executing the OTSS test process.

• Fully acceptable
• Unacceptable
• Acceptable with conditions










2.7.3 Testing Client / Server Systems

Concerns:
The concerns about client/server systems center on control. The testers need to determine that adequate controls are in place to ensure accurate, complete, timely, and secure processing of client/server software systems. The testers must address these five concerns:

• Organizational readiness
• Client installation
• Security
• Client data
• Client/server standards

Input:
The input to this test process will be the client/server system. This will include the server technology and capabilities, the communication network, and the client workstations that will be incorporated into the test.

Do Procedures:
The testing for client/server software includes the following three tasks.

Task 1: Assess Readiness
Client/server programs should have sponsors. Ideally these are the directors of information technology and the affected user management. It is the responsibility of the sponsors to ensure that the organization is ready for client/server technology. However, those charged with installing the new technology should provide the sponsors with a readiness assessment.
The following are the dimensions to the readiness assessment.
• Motivation
• Investment
• Client/Server skills
• User education
• Culture
• Client/Server support staff
• Client/Server aids/tools
• Software development process maturity







Task 2: Assess Key Components
Experience shows that if the key, or driving, components of a technology are in place and working, they provide most of the assurance necessary for effective processing. Four key components are identified for client/server technology (a small verification sketch follows the list):
1. Client installations are done correctly.
2. Adequate security is provided for the client/server system.
3. Client data is adequately protected.
4. Client/Server standards are in place and working.
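
As a rough illustration, the sketch below assumes a hypothetical inventory of client workstations and a few configuration attributes, and checks two of the components above: that client installations match the expected configuration and that standards are applied.

# Hedged sketch: audit each client workstation against the expected standard.
expected = {"app_version": "3.2.1", "antivirus": True, "naming_standard": "CS-01"}

clients = [
    {"host": "ws-001", "app_version": "3.2.1", "antivirus": True,  "naming_standard": "CS-01"},
    {"host": "ws-002", "app_version": "3.1.0", "antivirus": False, "naming_standard": "CS-01"},
]

def audit_client(client):
    """Return the attributes where a client deviates from the standard."""
    return [key for key, value in expected.items() if client.get(key) != value]

for client in clients:
    deviations = audit_client(client)
    status = "OK" if not deviations else "deviations: " + ", ".join(deviations)
    print(client["host"] + ": " + status)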

Task 3: Test the System: The testing of the client/server system should be performed taking into account the four key components listed above.
Output: The output from this test process is a test report indicating what works and what does not. The report should also contain recommendations by the test team for improvements where appropriate.




2.8 Evaluate Test Effectiveness

The major concern that testers should have is that their testing processes will not improve. Without improvement, testers will continue to make the same errors and perform testing inefficiently time after time.

The input for this evaluation should be the results of conducting software tests, accumulated over time. The type of information needed as input includes, but is not limited to, the following (a sketch of one way to record this data appears after the list):

• Number of tests conducted
• Resources expended in testing
• Test tools used
• Defects uncovered
• Size of software tested
• Days to correct defects
• Defects not corrected
• Defects uncovered during operation that were not uncovered during testing
• Developmental phase in which defects were uncovered
• Names of defects uncovered
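
One possible way to accumulate these inputs over time is sketched below; the record fields are illustrative and would be adapted to the organization's own data.

# Hedged sketch: a simple record structure for accumulating test results.
from dataclasses import dataclass, field

@dataclass
class TestCycleRecord:
    project: str
    tests_conducted: int
    hours_expended: float                # resources expended in testing
    tools_used: list = field(default_factory=list)
    defects_found: int = 0
    defects_uncorrected: int = 0
    defects_found_in_operation: int = 0  # defects that escaped testing
    software_size_loc: int = 0           # size of software tested
    days_to_correct: float = 0.0

history = [
    TestCycleRecord("payroll", tests_conducted=120, hours_expended=300,
                    defects_found=45, software_size_loc=25000),
]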

Once a decision has been made to formally assess the effectiveness of testing, an assessment process is needed. A seven-task approach to assessing the effectiveness of systems testing is as follows:

Task 1: Establish Assessment Objectives

The objectives for performing the assessment should be clearly established. If objectives are not defined, the measurement process may not be properly directed and thus may not be effective. These objectives include:

 Identify test weaknesses
 Identify the need for new test tools
 Assess project testing
 Identify good test practices
 Identify poor test practices
 Identify economical test practices


Task 2: Identify what to measure

The categories of information needed to accomplish the measurement objectives should be identified. The following are the five characteristics of application system testing that can be measured.

• Involvement
• Extent of Testing
• Resources
• Effectiveness
• Assessment


Task 3: Assign Measurement Responsibility

One group should be made responsible for collecting and assessing testing performance information. Without a specific accountable individual, there will be no catalyst to ensure that the data collection and assessment process occurs. The responsibility for the use of information services resources resides with IT management. However, they may desire to delegate the responsibility to assess the effectiveness of the test process to a function within the department.


Task 4: Select Evaluation Approach

Several approaches can be used in performing the assessment process. The one that best matches the managerial style should be selected. The following are the most common approaches to evaluating the effectiveness of testing.

• Judgment
• Compliance with methodology
• Problems after test
• User reaction
• Testing metrics

The metrics approach is recommended because, once established, it is easy to use and shows a high correlation to effective and ineffective practices. A major advantage of metrics is that the assessment process can be clearly defined, is known to the people being assessed, and is specific enough that it is easy to determine which testing variables need to be adjusted to improve the effectiveness, efficiency, and/or economy of the test process.

Task 5: Identify Needed Facts

The facts necessary to support the selected approach should be identified. The metrics approach clearly identifies the type of data needed for the assessment process. The needed information includes:

• Change characteristics
• Magnitude of system
• Cost of process being tested
• Cost of test
• Defects uncovered by testing
• Defects detected by phase
• Defects uncovered after test
• Cost of testing by function
• System complaints
• Quantification of defects
• Who conducted the test
• Quantification of correctness of defect


Task 6: Collect Evaluation Data

Once the data has been identified, a system must be established to collect and store the needed data in a form suitable for assessment. This may require a collection mechanism, a storage mechanism, and a method to select and summarize the information. Wherever possible, utility programs should be used for this purpose.
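
As an illustration of this collection-and-summarization step, the sketch below assumes the evaluation data has been exported to a CSV file; the file name and column names are assumptions made for this example.

# Hedged sketch: accumulate per-project totals suitable for the assessment in Task 7.
import csv
from collections import defaultdict

def summarize(path):
    totals = defaultdict(lambda: {"tests": 0, "defects": 0, "cost": 0.0})
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            bucket = totals[row["project"]]
            bucket["tests"] += int(row["tests_conducted"])
            bucket["defects"] += int(row["defects_found"])
            bucket["cost"] += float(row["test_cost"])
    return dict(totals)

# Example use (the file and its columns are assumed for illustration):
# print(summarize("test_results.csv"))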

Task 7: Assess the Effectiveness of Testing

The raw information must be analyzed to draw conclusions about the effectiveness of systems testing. From this analysis, action can be taken by the appropriate party.

Use of Testing Metrics: Testing metrics are relationships that show a high positive correlation to that which is being measured. Metrics are used in almost all disciplines as a basis of performing an assessment of the effectiveness of some process. Some of the common assessments familiar to most people in other disciplines include:

• Blood pressure (medicine)
• Student aptitude test (education)
• Net profit (accounting)
• Accidents per worker-day (safety)

The following are the suggested metrics for evaluating application system testing (a worked example using a few of them follows the list).

1. User participation (user participation test time divided by total test time).
2. Instructions exercised (number of instructions exercised versus total number of instructions).
3. Number of tests (number of tests versus size of system tested).
4. Paths tested (number of paths tested versus total number of paths).
5. Acceptance criteria tested (acceptance criteria verified versus total acceptance criteria).
6. Test cost (test cost versus total system cost).
7. Cost to locate defect (cost of testing versus the number of defects located in testing).
8. Achieving budget (anticipated cost of testing versus the actual cost of testing).
9. Detected production errors (number of errors detected in production versus application system size).
10. Defects uncovered in testing (defects located by testing versus total system defects).
11. Effectiveness of test to business (loss due to problems versus total resources processed by the system).
12. Asset value of the test (test cost versus assets controlled by system).
13. Rerun analysis (rerun hours versus production hours).
14. Abnormal termination analysis (installed changes versus number of application system hang-ups).
15. Source code analysis (number of source code statements changed versus the number of tests).
16. Test efficiency (number of tests required versus the number of system errors).
17. Startup failure (number of program changes versus the number of failures the first time the changed program is run in production).
18. System complaints (system complaints versus number of transactions processed).
19. Test automation (cost of manual test effort versus total test cost).
20. Requirement phase testing effectiveness (requirements test cost versus number of errors detected during requirements phase).
21. Design phase testing effectiveness (design test cost versus number of errors detected during design phase).
22. Program phase testing effectiveness (program test cost versus number of errors detected during program phase).
23. Test phase testing effectiveness (test cost versus number of errors detected during test phase).
24. Installation phase testing effectiveness (installation test cost versus number of errors detected during installation phase).
25. Maintenance phase testing effectiveness (maintenance test cost versus number of errors detected during maintenance phase).
26. Defects uncovered in test (defects uncovered versus size of systems).
27. Untested change problems (number of untested changes versus problems attributable to those changes).
28. Tested change problems (number of tested changes versus problems attributable to those changes).
29. Loss value of test (loss due to problems versus total resources processed by the system).
30. Scale of ten (assessment of testing rated on a scale of ten).
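
The arithmetic behind several of these metrics is straightforward. The worked sketch below computes metrics 6, 7, and 10 using purely illustrative figures.

# Hedged sketch: compute a few of the metrics above with made-up numbers.
test_cost = 40000.0                  # cost of test
total_system_cost = 400000.0         # total system cost
defects_found_in_test = 80           # defects located by testing
defects_found_in_production = 20     # defects that escaped testing
total_defects = defects_found_in_test + defects_found_in_production

test_cost_ratio = test_cost / total_system_cost                       # metric 6
cost_to_locate_defect = test_cost / defects_found_in_test             # metric 7
defects_uncovered_in_testing = defects_found_in_test / total_defects  # metric 10

print("Test cost ratio:              {:.0%}".format(test_cost_ratio))
print("Cost to locate a defect:      ${:,.2f}".format(cost_to_locate_defect))
print("Defects uncovered in testing: {:.0%}".format(defects_uncovered_in_testing))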






















2.9 Building Test Documentation


The test documentation should be an integral part of the documentation of application systems. Information services documentation standards should specify the type and extent of test documentation to be prepared and maintained.

The uses for that documentation include:

• Verify correctness of requirements.
• Improve user understanding of information services.
• Improve user understanding of application systems.
• Justify test resources.
• Determine test risk.
• Create test transactions.
• Evaluate test results.
• Reset the system.
• Analyze the effectiveness of the test.

Types:
The two general categories of test documentation are:

Test Plan: The plan for the testing of the application system, including detailed specifications, descriptions, and procedures for all tests, and test data reduction and evaluation criteria.

Test analysis documentation: Documentation that covers the test analysis results and findings; presents the demonstrated capabilities and deficiencies for review; and provides a basis for preparing a statement of the application system readiness for implementation.

Test plan Documentation:
The test plan outlines the process to be followed in testing the application system. It includes the plan, the specifications for the test and how those tests will be evaluated, plus the description of the tests themselves.

The documentation is divided into different sections.

Section 1: General Information
 Summary
 Environment and pretest background
 References: References that are helpful in preparing for the test or conducting the test should be listed, such as:
o Project request (authorization)
o Previously published documents on the project (project deliverables)
o Documentation concerning related projects
o Testing policies, standards, and procedures
o Books and articles describing test processes, techniques, and tools.
Section 2: Plan

 Software Description
 Milestones
 Testing (Identify Location).
o Schedule
o Requirements
 Equipment
 Software
 Personnel
o Testing materials
• Documentation
• Software to be tested and its medium
• Test inputs and sample outputs
• Test control software and work papers
o Test Training
 Testing (Identify Location)


Section 3: Specifications and Evaluations

 Specifications
o Requirements
o Software functions
o Test/function relationships
o Test progression
 Methods and constraints
o Methodology
o Conditions
o Extent
o Data Recording
o Constraints
 Evaluation
o Criteria
o Data Reduction

Section 4: Test Descriptions

 Test (Identify)
o Control
o Inputs
o Outputs
o Procedures


Test Analysis Report Documentation

The test analysis report documents the results of the test. It serves the dual purposes of recording the results for analysis and reporting those analyses to the involved parties.


The documentation is divided into different sections.


Section 1: General Information

• Summary
• Environment
• References
o Project Request
o Previously published
o Documentation concerning related projects
Section 2: Test Results and Findings

• Test (Identify)
o Dynamic Data Performance
o Static Data Performance

Section 3: Software Function Findings

• Function (Identify)
o Performance
o Limits

Section 4: Analysis Summary

• Capabilities
• Deficiencies
• Recommendations and Estimates
• Opinion




3.0 Unit Summary
In this session we have learnt:
1. The process of test execution.
2. Different approaches to test execution.
3. Test documentation.

3.1 Exercise
Answer the following briefly:
1. What process would you follow for installation testing?
2. What is the process for testing web applications, and what are the areas of concern?
3. What is the process for testing off-the-shelf software?
4. What is test effectiveness?
5. List the uses of test documentation.
