Monday, January 14, 2008

Testing Types

Static Testing

The Verification activities fall into the category of Static Testing. During static testing, you work against a checklist to verify that the work you are doing meets the organization's set standards. These standards can cover coding, integration and deployment. Reviews, inspections and walkthroughs are static testing methodologies.

Dynamic Testing

Dynamic Testing involves executing the software, supplying input values and checking whether the output is as expected. These are the Validation activities. Unit tests, integration tests, system tests and acceptance tests are a few of the dynamic testing methodologies. As we go further, let us understand the various test life cycles and get to know the testing terminology. To understand more of software testing, various methodologies, tools and techniques, you can download the Software Testing Guide Book from here.

Difference between Static and Dynamic Testing: compare the two definitions above; static testing examines the work products without executing them, while dynamic testing executes the software itself.

Black Box Testing
Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:

1. Incorrect or missing functions,
2. Interface errors,
3. Errors in data structures or external database access,
4. Performance errors, and
5. Initialization and termination errors.

Tests are designed to answer the following questions:

1. How is the function's validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?

White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which

1. Reduce the number of additional test cases that must be designed to achieve reasonable testing, and
2. Tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.

Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.

Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are defined.
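
As an illustration of guideline 1, here is a minimal sketch in Python, assuming a hypothetical function validate_age that accepts ages in the range 18 to 60: the range yields one valid class and two invalid classes, and one representative value is tested from each.

import unittest

def validate_age(age):
    # Hypothetical function under test: accepts ages 18..60 inclusive.
    return 18 <= age <= 60

class EquivalencePartitionTests(unittest.TestCase):
    # Guideline 1: a range yields one valid and two invalid classes.
    def test_valid_class(self):
        self.assertTrue(validate_age(35))    # representative of [18, 60]

    def test_invalid_class_below_range(self):
        self.assertFalse(validate_age(10))   # representative of values below 18

    def test_invalid_class_above_range(self):
        self.assertFalse(validate_age(75))   # representative of values above 60

if __name__ == "__main__":
    unittest.main()

One test per class is enough: any other member of the same equivalence class is assumed to expose the same errors.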

Boundary Value Analysis

This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:

1. For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
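
Continuing the hypothetical validate_age example from above, a minimal boundary value sketch for the range [18, 60] exercises both boundaries and the values immediately beyond them (guideline 1):

import unittest

def validate_age(age):
    # Same hypothetical range check: accepts ages 18..60 inclusive.
    return 18 <= age <= 60

class BoundaryValueTests(unittest.TestCase):
    def test_boundaries(self):
        # For a range [a, b], exercise a, b, and the values just
        # below and just above each boundary.
        cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}
        for value, expected in cases.items():
            self.assertEqual(validate_age(value), expected, msg="age=%d" % value)

if __name__ == "__main__":
    unittest.main()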

Cause-Effect Graphing Techniques

Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:

1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
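
As a minimal sketch of steps 3 and 4, the decision table below maps combinations of causes to the expected effect for a hypothetical login function, and each rule becomes one executable test case:

def login(valid_user, valid_password):
    # Hypothetical module under test.
    if valid_user and valid_password:
        return "grant access"
    return "show error"

# Each rule: (cause: valid_user, cause: valid_password) -> effect
decision_table = [
    ((True, True), "grant access"),
    ((True, False), "show error"),
    ((False, True), "show error"),
    ((False, False), "show error"),
]

for (user_ok, pw_ok), expected_effect in decision_table:
    assert login(user_ok, pw_ok) == expected_effect
print("all decision-table rules pass")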


White box testing
White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that

1. Guarantee that all independent paths within a module have been exercised at least once,
2. Exercise all logical decisions on their true and false sides,
3. Execute all loops at their boundaries and within their operational bounds, and
4. Exercise internal data structures to ensure their validity.

The Nature of Software Defects

The likelihood of logic errors and incorrect assumptions is inversely proportional to the probability that a program path will be executed. General processing tends to be well understood, while special-case processing tends to be prone to errors.

We often believe that a logical path is not likely to be executed when it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing.

Typographical errors are random.

Basis Path Testing
This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node.

Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.

The Basis Set
An independent path is any path through a program that introduces at least one new set of processing statements (must move along at least one new edge in the path). The basis set is not unique. Any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to

1. The number of regions in the flow graph.
2. V(G) = E - N + 2 where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1 where P is the number of predicate nodes.
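
As a quick check that the three formulas agree, here is a sketch for a small hand-built flow graph (an if/else followed by a loop, chosen purely for illustration):

# Nodes are labelled 1..6; each edge is a (from, to) pair.
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 4), (4, 6)]
nodes = {n for edge in edges for n in edge}
predicate_nodes = {1, 4}   # nodes with more than one outgoing edge

v_by_edges = len(edges) - len(nodes) + 2    # V(G) = E - N + 2 = 7 - 6 + 2
v_by_predicates = len(predicate_nodes) + 1  # V(G) = P + 1 = 2 + 1

assert v_by_edges == v_by_predicates == 3
print("V(G) =", v_by_edges)

This flow graph also has three regions (two bounded by the if/else and the loop, plus the outer region), so all three calculations give V(G) = 3: three linearly independent paths must be tested.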

Deriving Test Cases
1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph. Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths. Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set. Each test case is executed and compared to the expected results.

Automating Basis Set Derivation
The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph. Each row and column corresponds to a particular node, and the matrix entries correspond to the connections (edges) between nodes. By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link weights can be represented:

- the probability that an edge will be executed,
- the processing time expended during link traversal,
- the memory required during link traversal, or
- the resources required during link traversal.

Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.
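
A minimal graph-matrix sketch for the same flow graph used earlier: entry [i][j] holds the link weight for the edge from node i to node j, here the simplest weight of 1 if the edge exists and 0 otherwise.

N = 6
matrix = [[0] * N for _ in range(N)]
for src, dst in [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 4), (4, 6)]:
    matrix[src - 1][dst - 1] = 1   # a probability or timing weight could go here instead

# A row that sums to 2 or more marks a node with multiple
# out-edges: a predicate node.
predicates = [i + 1 for i, row in enumerate(matrix) if sum(row) >= 2]
print("predicate nodes:", predicates)   # -> [1, 4]
print("V(G) =", len(predicates) + 1)    # -> 3, matching E - N + 2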

Loop Testing
This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:

1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.

Simple Loops
The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop:

1. skip the loop entirely,
2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, n + 1 passes through the loop.
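
As a sketch of these four cases, assuming a toy function whose loop sums the first count elements of a list and a maximum of n = 10 passes:

def sum_first(values, count):
    # Toy function under test: a simple loop over the first `count` values.
    total = 0
    for i in range(min(count, len(values))):
        total += values[i]
    return total

n = 10
data = list(range(n))
# Pass counts from the guidelines: 0, 1, m (< n), n - 1, n, n + 1.
for passes in [0, 1, n // 2, n - 1, n, n + 1]:
    expected = sum(data[:min(passes, n)])
    assert sum_first(data, passes) == expected
print("simple-loop boundary cases pass")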

Nested Loops
The testing of nested loops cannot simply extend the technique of simple loops since this would result in a geometrically increasing number of test cases. One approach for nested loops:

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimum values and other nested loops at typical values.
4. Continue until all loops have been tested.

Concatenated Loops

Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.

Unstructured Loops

This type of loop should be redesigned, not tested!

Other White Box Techniques
Other white box testing techniques include:

1. Condition testing exercises the logical conditions in a program.
2. Data flow testing selects test paths according to the locations of definitions and uses of variables in the program.

Unit Testing

In computer programming, a unit test is a method of testing the correctness of a particular module of source code.

The idea is to write test cases for every non-trivial function or method in the module so that each test case is separate from the others if possible. This type of testing is mostly done by the developers.
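
As a minimal sketch, assuming a hypothetical module with a word_count function, each behaviour gets its own small, independent test case:

import unittest

def word_count(text):
    # Function under test: count whitespace-separated words.
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_single_word(self):
        self.assertEqual(word_count("hello"), 1)

    def test_multiple_words(self):
        self.assertEqual(word_count("unit tests document the code"), 5)

if __name__ == "__main__":
    unittest.main()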

Benefits

The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. It provides a written contract that the piece must satisfy. This isolated testing provides four main benefits:

Encourages change

Unit testing allows the programmer to refactor code at a later date, and make sure the module still works correctly (regression testing). This provides the benefit of encouraging programmers to make changes to the code since it is easy for the programmer to check if the piece is still working properly.

Simplifies Integration

Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up testing style approach. Testing the parts of a program first and then testing the sum of its parts makes integration testing easier.

Documents the code

Unit testing provides a sort of "living document" for the class being tested. Clients looking to learn how to use the class can look at the unit tests to determine how to use the class to fit their needs.

Separation of Interface from Implementation

Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example of this is classes that depend on a database; in order to test the class, the tester finds herself writing code that interacts with the database. This is a mistake, because a unit test should never go outside of its own class boundary. As a result, the software developer abstracts an interface around the database connection, and then implements that interface with their own Mock Object. This results in loosely coupled code, thus minimizing dependencies in the system.
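
A minimal sketch of this idea, with all names hypothetical: the class under test talks to an injected repository interface rather than to the database, so the unit test can substitute a mock object (here using Python's unittest.mock):

import unittest
from unittest import mock

class UserService:
    def __init__(self, repository):
        self.repository = repository   # injected dependency, not a database

    def greeting(self, user_id):
        name = self.repository.find_name(user_id)
        return "Hello, %s!" % name

class UserServiceTest(unittest.TestCase):
    def test_greeting_uses_repository(self):
        repo = mock.Mock()
        repo.find_name.return_value = "Ada"
        service = UserService(repo)

        self.assertEqual(service.greeting(42), "Hello, Ada!")
        repo.find_name.assert_called_once_with(42)   # no database touched

if __name__ == "__main__":
    unittest.main()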

Limitations

It is important to realize that unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves. Therefore, it will not catch integration errors, performance problems or other system-wide issues. In addition, it may not be trivial to anticipate all the special cases of input the program unit under study may receive in reality. Unit testing is only effective if it is used in conjunction with other software testing activities.


Requirements Testing

Usage:


To ensure that the system performs correctly.

To ensure that correctness can be sustained for a considerable period of time.

The system can be tested for correctness through all phases of the SDLC, but in the case of reliability, the programs should be in place to make the system operational.

Objective:

Successful implementation of user requirements.
Correctness maintained over a considerable period of time.
Processing of the application complies with the organization's policies and procedures.
Secondary users' needs are fulfilled:
Security officer
DBA
Internal auditors
Record retention
Comptroller
How to Use

Test conditions are created.
These test conditions are generalized ones, which become test cases as the SDLC progresses, until the system is fully operational.
Test conditions are more effective when created from the user's requirements.
If test conditions are created from documents, then any errors in those documents will be incorporated into the test conditions, and testing will not be able to find them.
If test conditions are created from other sources (other than documents), error trapping is effective.
A functional checklist is created.
When to Use


Every application should be requirement tested.
Testing should start at the requirements phase and progress through to the operations and maintenance phase.
The method used to carry out requirements testing, and the extent of it, is important.
Example


Creating a test matrix to prove that the system requirements as documented are the requirements desired by the user.
Creating a checklist to verify that the application complies with the organizational policies and procedures.

Regression Testing

Usage:


All aspects of the system remain functional after testing.

A change in one segment does not change the functionality of another segment.

Objective:


Determine that system documents remain current.
Determine that system test data and test conditions remain current.
Determine that previously tested system functions perform properly, without being affected by changes made in some other segment of the application system.
How to Use


Test cases that were used previously for the already tested segment are re-run, to ensure that the results of the segment tested now match the results of the same segment tested earlier.
Test automation is needed to carry out the test transactions (test condition execution); otherwise the process is very time consuming and tedious.
In this kind of testing the cost/benefit should be carefully evaluated, or the effort spent on testing will be high and the payback minimal. A sketch of the re-run-and-compare idea follows this list.
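
A minimal sketch of that idea, with the baseline file name and the function under test both hypothetical: the first run records a baseline of results, and each later run re-executes the same transactions and reconciles the output against that baseline.

import json

def process(transaction):
    # Hypothetical segment under test.
    return {"id": transaction["id"], "total": transaction["qty"] * transaction["price"]}

transactions = [{"id": 1, "qty": 2, "price": 5.0}, {"id": 2, "qty": 1, "price": 9.5}]
results = [process(t) for t in transactions]

BASELINE = "baseline_results.json"
try:
    with open(BASELINE) as f:
        baseline = json.load(f)
    assert results == baseline, "regression: output differs from baseline"
    print("no regressions detected")
except FileNotFoundError:
    with open(BASELINE, "w") as f:
        json.dump(results, f)   # first run records the baseline
    print("baseline recorded")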
When to Use


When there is a high risk that new changes may affect unchanged areas of the application system.
In the development process: regression testing should be carried out after the predetermined changes are incorporated into the application system.
In the maintenance phase: regression testing should be carried out if there is a high risk that loss may occur when changes are made to the system.
Example


Re-running previously conducted tests to ensure that the unchanged portion of the system functions properly.
Reviewing previously prepared system documents (manuals) to ensure that they are not affected by changes made to the application system.
Disadvantage

Time consuming and tedious if test automation is not done.

Error Handling Testing

Usage:

It determines the ability of the application system to process incorrect transactions properly.

Errors encompass all unexpected conditions.

In some systems, approximately 50% of the programming effort will be devoted to handling error conditions.

Objective:


Determine that the application system recognizes all expected error conditions.
Determine that accountability for processing errors has been assigned, and that procedures provide a high probability that errors will be properly corrected.
Determine that reasonable control is maintained over errors during the correction process.
How to Use


A group of knowledgeable people is required to anticipate what can go wrong in the application system.
All the people knowledgeable about the application should assemble to integrate their knowledge of the user area, auditing and error tracking.
Then logical test error conditions should be created based on this assimilated information.
When to Use


Throughout the SDLC.
The impact of errors should be identified, and errors should be corrected to reduce them to an acceptable level.
Used to assist in the error management process of system development and maintenance.
Example

Create a set of erroneous transactions and enter them into the application system, then find out whether the system is able to identify the problems. A sketch of this appears below.
Use iterative testing: enter transactions and trap errors, correct them, then enter transactions with errors that were not present in the system earlier.
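
A minimal sketch of the first example, with hypothetical validation rules: deliberately erroneous transactions are entered, and the test verifies that each one is recognized and rejected rather than processed.

def post_transaction(amount, account):
    # Hypothetical system under test.
    if account is None:
        raise ValueError("missing account")
    if amount <= 0:
        raise ValueError("amount must be positive")
    return {"account": account, "amount": amount}

erroneous = [
    {"amount": -5, "account": "A-1"},   # negative amount
    {"amount": 0, "account": "A-1"},    # zero amount
    {"amount": 10, "account": None},    # missing account
]

for txn in erroneous:
    try:
        post_transaction(txn["amount"], txn["account"])
    except ValueError as err:
        print("rejected as expected:", err)
    else:
        raise AssertionError("erroneous transaction accepted: %r" % txn)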

Manual Support Testing

Usage:


It involves testing all the functions performed by people in preparing data for, and using data from, the automated system.

Objective:


Verify that manual support documents and procedures are correct.
Determine that manual support responsibility has been correctly assigned.
Determine that manual support people are adequately trained.
Determine that manual support and the automated segment are properly interfaced.
How to Use


The process is evaluated in all segments of the SDLC.
Execution of the tests can be done in conjunction with normal system testing.
Instead of preparing, executing and entering actual test transactions, the clerical and supervisory personnel can use the results of processing from the application system.
Testing the people requires testing the interface between the people and the application system.
When to Use


Verification that manual systems function properly should be conducted throughout the SDLC.
It should not be left to the later stages of the SDLC.
It is best done at the installation stage, so that the clerical people do not get used to the actual system just before it goes to production.
Example


Provide input personnel with the type of information they would normally receive from their customers and then have them transcribe that information and enter it in the computer.
Users can be provided a series of test conditions and then asked to respond to those conditions. Conducted in this manner, manual support testing is like an examination in which the users are asked to obtain the answer from the procedures and manuals available to them.

Intersystem Testing

Usage:


To ensure that the interconnection between applications functions correctly.

Objective:


Determine that proper parameters and data are correctly passed between the applications.
Ensure that documentation for the involved systems is correct and accurate.
Ensure that proper timing and coordination of functions exists between the application systems.
How to Use


Operations of multiple systems are tested.
Multiple systems are run, with the output of one feeding the next, to check that transactions are accepted and processed properly.
When to Use


When there is a change in parameters in an application system.
The risk associated with erroneous parameters decides the extent and type of testing.
Intersystem parameters should be checked/verified after a change or a new application is placed in production.
Example


Develop a test transaction set in one application and pass it to the other system to verify the processing.
Enter test transactions in the live production environment and then use the integrated test facility to check the processing from one system to another.
Verify that new changes to the parameters of the systems under test are correctly reflected in the documentation.
Disadvantage

Time consuming and tedious if test automation is not done.
Cost may be high if the systems must be run iteratively several times.

Control Testing

Usage:


Control is a management tool to ensure that processing is performed in accordance with management's intent.

Objective:

Accurate and complete data
Authorized transactions
Maintenance of adequate audit trail of information.
Efficient, effective and economical process.
Process meeting the needs of the user.
How to Use

To test controls, risks must be identified.
Testers should take a negative approach, i.e. determine or anticipate what can go wrong in the application system.
Develop a risk matrix that identifies the risks, the controls, and the segment within the application system in which each control resides.
When to Use
Should be tested with other system tests.
Example

Verify that file reconciliation procedures work.
Verify that manual controls are in place.

Parallel Testing

Usage:


To ensure that the processing of a new application (new version) is consistent with the processing of the previous version of the application.

Objective:


Conducting redundant processing to ensure that the new version or application performs correctly.
Demonstrating consistency or inconsistency between two versions of the application.
How to Use


The same input data should be run through two versions of the same application system.
Parallel testing can be done with the whole system or with part of the system (a segment).
When to Use


When there is uncertainty regarding the correctness of processing in the new application, and the new and old versions are similar.
In financial applications like banking, where there are many similar applications, the processing can be verified for the old and new versions through parallel testing.
Example


Operating the new and old versions of a payroll system to determine that the paychecks from both systems are reconcilable. A sketch of this follows.
Running the old version of the application to ensure that the functions of the old system work correctly with respect to the problems encountered in the new system.
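
A minimal sketch of the payroll example, with both versions hypothetical: the same inputs are run through the old and new implementations and the outputs are reconciled.

def net_pay_old(gross):
    return round(gross - gross * 0.20, 2)   # old version: flat 20% deduction

def net_pay_new(gross):
    tax = gross * 0.20                      # new version: same rule, reimplemented
    return round(gross - tax, 2)

payroll_inputs = [1200.00, 2500.50, 3999.99]

for gross in payroll_inputs:
    old, new = net_pay_old(gross), net_pay_new(gross)
    assert old == new, "versions disagree for gross=%s: %s vs %s" % (gross, old, new)
print("old and new versions reconcile")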

Volume Testing

Whichever title you choose (for us, volume test), here we are talking about realistically exercising an application in order to measure the service delivered to users at different levels of usage. We are particularly interested in its behavior when the maximum number of users are concurrently active and when the database contains its greatest data volume.

The creation of a volume test environment requires considerable effort. It is essential that the correct level of complexity exists in terms of the data within the database and the range of transactions and data used by the scripted users, if the tests are to reliably reflect the intended production environment. Once the test environment is built, it must be fully utilised. Volume tests offer much more than simple service delivery measurement. The exercise should seek to answer the following questions:

What service level can be guaranteed? How can it be specified and monitored?

Are changes in user behaviour likely? What impact will such changes have on resource consumption and service delivery?

Which transactions/processes are resource hungry in relation to their tasks?

What are the resource bottlenecks? Can they be addressed?

How much spare capacity is there?

The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during extended time periods.

Stress Testing

The purpose of stress testing is to find defects in the system's capacity to handle large numbers of transactions during peak periods. For example, a script might require users to log in and proceed with their daily activities while, at the same time, a series of workstations emulating a large number of other systems run recorded scripts that add, update, or delete from the database.
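
A minimal load-generation sketch, with the transaction function, thread count, and volume all hypothetical: many threads submit transactions concurrently while failures are counted.

import threading

def submit_transaction(txn_id):
    # Stand-in for a call into the system under test.
    return txn_id * 2

failures = 0
lock = threading.Lock()

def worker(start, count):
    global failures
    for i in range(start, start + count):
        try:
            submit_transaction(i)
        except Exception:
            with lock:
                failures += 1

threads = [threading.Thread(target=worker, args=(t * 1000, 1000)) for t in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("submitted 20000 transactions, failures:", failures)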

Performance Testing

System performance is generally assessed in terms of response time and throughput rates under differing processing and configuration conditions. To attack performance problems, several questions should be asked first:


How much application logic should be remotely executed?
How much updating should be done to the database server over the network from the client workstation?
How much data should be sent to each client in each transaction?

According to Hamilton [10], performance problems are most often the result of the client or server being configured inappropriately.

The best strategy for improving client-server performance is a three-step process [11]. First, execute controlled performance tests that collect data about volume, stress, and loading tests. Second, analyze the collected data. Third, examine and tune the database queries and, if necessary, provide temporary data storage on the client while the application is executing.
