Monday, January 14, 2008

Test Case Authoring

Table of Contents


1.0 General
Module Objectives
Module Structure
2.0 Test Cases
Attributes of a good test case
Most common mistakes while writing test cases
3.0 Authoring Test Cases Using Functional Specifications
4.0 Authoring Test Cases Using Use Cases
Advantages of Test Cases derived from Use Cases
5.0 Test Case Management and Test Case Authoring Tools
6.0 Test Case Authoring Tools
Mercury Interactive’s Test Director
Features & benefits of Test Director
Applabs TestLink
7.0 Test Case Coverage
8.0 Unit Summary
Exercise

1.0 General
Test case authoring is one of the most complex and time-consuming activities for any test engineer.
The progress of a project depends on the quality of the test cases written. Test engineers need to take utmost care while developing test cases and must follow the standard rules of test case authoring, so that the cases are easy to understand and implement.
This module provides an insight into the fundamentals of test case authoring and the techniques to be adopted while authoring, such as writing test cases based upon functional specifications or deriving them from use cases.
1.1 Module Objectives
At the end of this module, you should be able to:
Define test cases
Understand the process of developing/authoring test cases
Understand test case management using tools
Understand test case coverage
1.2 Module Structure

S.No  Topic                                        Duration (Hrs)
1     Overview                                     1
2     Test Cases with Functional Specifications    2
3     Test Cases with Use Cases                    2
4     Test Case Authoring Tools                    2
5     Test Case Coverage                           1
      Total Duration                               8







2.0 Test Cases
Definition: A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
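To make these particulars concrete, here is a minimal sketch of a test case captured as a simple Python structure; the field names and the sample login case are illustrative assumptions, not a prescribed template.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TestCase:
    # Illustrative container for the particulars listed above.
    identifier: str                 # unique test case identifier, e.g. "TC_LOGIN_001"
    name: str                       # short test case name
    objective: str                  # what the test is meant to verify
    setup: str                      # test conditions / setup required before execution
    input_data: List[str] = field(default_factory=list)         # input data requirements
    steps: List[Tuple[str, str]] = field(default_factory=list)  # (action, expected result) pairs

# Example record (invented data):
login_case = TestCase(
    identifier="TC_LOGIN_001",
    name="Valid login",
    objective="Verify that a registered user can log in with valid credentials",
    setup="User 'jdoe' exists and the login page is displayed",
    input_data=["username=jdoe", "password=<valid password>"],
    steps=[
        ("Enter the username and password", "The fields accept the input"),
        ("Click the Login button", "The home page for 'jdoe' is displayed"),
    ],
)

Keeping actions and expected results paired in this way also enforces the rule discussed below: what the tester does is an action, what the system does is a result.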
2.1 Attributes of a good test case

Accurate. They test what their descriptions say they will test.
It should always be clear whether the tester is doing something or the system is doing it. If a tester reads, "The button is pressed," does that mean he or she should press the button, or does it mean to verify that the system displays it as already pressed? One of the fastest ways to confuse a tester is to mix up actions and results. To avoid confusion, actions should always be entered under the ‘Steps’ column, and results under the ‘Results’/’Expected Results’ column. What the tester does is always an action. What the system displays or does is always a result.
Economical. Test cases should have only the steps or fields needed for their purpose. They should not give a guided tour of the software.
How long should a test case be? Generally, a good length for step-by-step cases is 10-15 steps. There are several benefits to keeping tests this short:
It takes less time to test each step in short cases
The tester is less likely to get lost, make mistakes, or need assistance.
The test manager can accurately estimate how long it will take to test
Results are easier to track

We should not try to cheat on the standard of 10-15 steps by cramming a lot of action into one step. A step should be one clear input or tester task. You can always tag a simple finisher onto the same step, such as a click or a key press. A step can also include a set of logically related inputs. You don't have to have a result for each step if the system doesn't respond to the step.
2.1.1.1 Repeatable, self-standing
A test case is a controlled experiment. It should get the same results every time no matter who tests it. If only the writer can test it and get the result, or if the test gets different results for different testers, it needs more work in the setup or actions.
2.1.1.2 Appropriate
A test case has to be appropriate for the testers and environment. If it is theoretically sound but requires skills that none of the testers have, it will sit on the shelf. Even if you know who is testing the first time, you need to consider what happens down the road: maintenance and regression testing.





2.1.1.3 Traceable
You have to know what requirement the case is testing. It may meet all the other standards, but if its result, pass or fail, doesn't matter, why bother?
The above list is indicative rather than exhaustive. Based on individual requirements, more standards may be added to it.
2.2 Most common mistakes while writing test cases
In each writer's work, test case defects tend to cluster around certain writing mistakes. If you are writing cases or managing writers, don't wait until the cases are all done before finding these mistakes. Review the cases every day or two, looking for the faults that will make them harder to test and maintain. Chances are you will discover that the opportunities to improve are clustered in one of the seven most common test case mistakes:

1. Making cases too long
2. Incomplete, incorrect, or incoherent setup
3. Leaving out a step
4. Naming fields that changed or no longer exist
5. Unclear whether tester or system does action
6. Unclear what is a pass or fail result
7. Failure to clean up








3.0 Authoring Test Cases Using Functional Specifications
This means writing test cases for an application with the intent to uncover nonconformance with functional specifications. This type of testing activity is central to most software test efforts as it tests whether an application is functioning in accordance with its specified requirements. Additionally, some of the test cases may be written for testing the nonfunctional aspects of the application, such as performance, security, and usability.
The importance of having testable, complete, and detailed requirements cannot be overemphasized. In practice, however, having a perfect set of requirements at the tester's disposal is a rarity. In order to create effective functional test cases, the tester must understand the details and intricacies of the application. When these details and intricacies are inadequately documented in the requirements, the tester must conduct an analysis of them.
Even when detailed requirements are available, the flow and dependency of one requirement to the other is often not immediately apparent. The tester must therefore explore the system in order to gain a sufficient understanding of its behavior to create the most effective test cases.
Effective test design includes test cases that rarely overlap, but instead provide effective coverage with minimal duplication of effort (although duplication sometimes cannot be entirely avoided in assuring complete testing coverage). Apart from avoiding duplication of work, the test team should review the test plan and design in order to:
• Identify any patterns of similar actions or events used by several transactions. Given this information, test cases should be developed in a modular fashion so that they can be reused and recombined to execute various functional paths, avoiding duplication of test-creation efforts.
• Determine the order or sequence in which specific transactions must be tested to accommodate preconditions necessary to execute a test procedure, such as database configuration, or other requirements that result from control or work flow.
• Create a test procedure relationship matrix that incorporates the flow of the test procedures based on the preconditions and postconditions necessary to execute a test case. A test-case relationship diagram that shows the interactions of the various test procedures, such as the high-level test procedure relationship diagram created during test design, can improve the testing effort (a small ordering sketch follows this list).
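As a small ordering sketch (assuming Python 3.9+ and invented test case names), the fragment below sequences test procedures so that each one runs only after the cases that establish its preconditions.

from graphlib import TopologicalSorter

# Each test case lists the cases whose postconditions it depends on (hypothetical data).
depends_on = {
    "TC_CREATE_ACCOUNT": set(),
    "TC_LOGIN": {"TC_CREATE_ACCOUNT"},
    "TC_PLACE_ORDER": {"TC_LOGIN"},
    "TC_VIEW_ORDER_HISTORY": {"TC_PLACE_ORDER"},
}

# Produce an execution order that respects every precondition.
execution_order = list(TopologicalSorter(depends_on).static_order())
print(execution_order)
# ['TC_CREATE_ACCOUNT', 'TC_LOGIN', 'TC_PLACE_ORDER', 'TC_VIEW_ORDER_HISTORY']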
Another consideration for effectively creating test cases is to determine and review critical and high-risk requirements, and to test the most important functions early in the development schedule. It can be a waste of time to invest effort in creating test procedures that verify functionality rarely executed by the user, while failing to create test procedures for functions that pose high risk or are executed most often.
3.1.1.1 To sum up
Effective test-case design requires an understanding of system variations, flows, and scenarios. It is often difficult to wade through page after page of requirements documents in order to understand connections, flows, and interrelationships. Analytical thinking and attention to detail are required to understand the cause-and-effect connections within the system's intricacies. It is not enough to design high-level test cases that exercise the system only at a high level; it is also important to design test procedures at the detailed, gray-box level.



4.0 Authoring Test Cases Using Use Cases
A use case is a sequence of actions performed by a system which, taken together, produce a result of value to a system user. While use cases are often associated with object-oriented systems, they apply equally well to most other types of systems.
Use cases and test cases work well together in two ways: If the use cases for a system are complete, accurate, and clear, the process of deriving the test cases is straightforward. And if the use cases are not in good shape, the attempt to derive test cases will help to debug the use cases.
4.1 Advantages of Test Cases derived from Use Cases
Traditional test case design techniques include analyzing the functional specifications, the software paths, and the boundary values. These techniques are all valid, but use case testing offers a new perspective and identifies test cases which the other techniques have difficulty seeing.
Use cases describe the “process flows” through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system (that moment you realize “we can’t get there from here!”). They also help uncover integration bugs, caused by the interaction and interference of different features, which individual feature testing would not see. The use case method supplements (but does not supplant) the traditional test case design techniques.
4.1.1.1 What should one know before converting use cases into test cases?
Business logic and terminology of the vertical.
Technical complexities and environment compatibilities of the application.
Limitations of the application and its design.
Software testing experience.
4.1.1.2 How to approach deriving test cases from use cases
Read and understand the objective of the use case.
Identify the conditions involved in the use case.
Identify the relations between the conditions within a use case.
Identify the dependencies of one use case on another.
Check the functional flow.
If you suspect an issue in any manner, get it resolved with your client or design team.
Break down the positive and negative test scenarios from each condition.
Collect test data for the identified scenarios.
Prepare a high-level test case index with unique test IDs.
If you have a prototype of the application, compare the test scenarios with the prototype and review the test case index.
Convert the test scenarios into test cases.
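As a minimal sketch of the breakdown and conversion steps above, the fragment below takes one invented use case condition, splits it into positive and negative scenarios with test data, and builds a small test case index; every identifier and value is an assumption for illustration.

# Hypothetical condition from a use case: "The transfer amount must be between
# 1 and 10,000 and must not exceed the account balance."
scenarios = [
    # (description, test data, expected outcome)
    ("Transfer within limits and balance", {"balance": 500, "amount": 100}, "Transfer accepted"),  # positive
    ("Transfer equal to the balance",      {"balance": 500, "amount": 500}, "Transfer accepted"),  # positive
    ("Transfer exceeding the balance",     {"balance": 500, "amount": 600}, "Transfer rejected"),  # negative
    ("Transfer below the minimum of 1",    {"balance": 500, "amount": 0},   "Transfer rejected"),  # negative
]

# Build the high-level test case index with unique test IDs.
test_case_index = [
    {"test_id": f"TC_TRANSFER_{i:03d}", "scenario": desc, "data": data, "expected": outcome}
    for i, (desc, data, outcome) in enumerate(scenarios, start=1)
]

for case in test_case_index:
    print(case["test_id"], "-", case["scenario"], "->", case["expected"])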
The first step is to develop the Use Case topics from the functional requirements of the Software Requirement Specification. Each Use Case topic is depicted as an oval labeled with the Use Case name (see Figure 1). The diagram also identifies the Actors outside the system and which participant initiates the action.




Figure 1: Use Case Diagram

The Use Case diagram just provides a quick overview of the relationship of actors to Use Cases. The meat of the Use Case is the text description. This text will contain the following:

Name
Brief Description
SRS Requirements Supported
Pre & Post Conditions
Event Flow
In the first iteration of Use Case definition, the topic, a brief description, and the actors for each case are identified and consolidated. In the second iteration, the Event Flow of each Use Case can be fleshed out. The Event Flow can be thought of as role-playing the requirements specification: each requirement is acted out as an interaction between the actor and the system. The requirements in the Software Requirement Specification are each uniquely numbered so that they can be accounted for in verification testing. These requirements should be mapped to the Use Case that satisfies them, for accountability.

The Pre-Condition specifies the required state of the system prior to the start of the Use Case. This can be used for a similar purpose in the Test Case. The Post-Condition is the state of the system after the actor interaction. This may be used for test pass/fail criteria.

The event flow is a description (usually a list) of the steps of the actor's interaction with the system and the system's required response. Recall that the system is viewed as a black box. The event flow also contains exceptions, which may cause alternate paths through the flow. A sketch of what such a use case might look like for a simple telephone system follows.
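Here is a minimal sketch of such a use case for a hypothetical telephone system, written as a simple Python structure so the fields listed earlier stay explicit; the requirement IDs, steps, and exception are invented for illustration.

place_call_use_case = {
    "name": "Place Local Call",
    "brief_description": "A subscriber places a call to another local subscriber.",
    "srs_requirements_supported": ["SRS-12", "SRS-13"],  # invented requirement numbers
    "precondition": "The caller's handset is on-hook and the line is in service.",
    "postcondition": "The two parties are connected and the call is billed.",
    "event_flow": [
        # (actor action, required system response)
        ("Caller lifts the handset", "System plays dial tone"),
        ("Caller dials a valid local number", "System rings the called party"),
        ("Called party answers", "System connects the parties and starts billing"),
    ],
    "exceptions": [
        ("Called party is busy", "System plays busy tone and no call is billed"),
    ],
}

When deriving a test case, the precondition maps to the test setup, each event flow entry maps to a step and its expected result, and the postcondition supplies the pass/fail criteria; each exception becomes an alternate-path test case.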
5.0 Test Case Management and Test Case Authoring Tools
Once test cases are developed, they need to be maintained properly to avoid confusion about which test cases have been executed and which have not, and which have passed or failed. In other words, the status of the test cases needs to be maintained.
So, the most important activity to protect the value of test cases is to maintain them so that they are testable. They should be maintained after each testing cycle, since testers will find defects in the cases as well as in the software.
Test cases lost or corrupted by poor versioning and storage defeat the whole purpose of making them reusable. Configuration management (CM) of cases should be handled by the organization or project, rather than by test management. If the organization does not have this level of process maturity, the test manager or test writer needs to supply it. Either the project or the test manager should protect valuable test case assets with the following configuration management standards:
 Naming and numbering conventions
 Formats, file types
 Versioning
 Test objects needed by the case, such as databases
 Read only storage
 Controlled access
 Off-site backup
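As one small illustration of the first standard above, the sketch below checks test case file names against a hypothetical naming and numbering convention; the pattern and file names are assumptions, not a prescribed standard.

import re

# Hypothetical convention: <PROJECT>_<MODULE>_TC<three digits>_v<version>.xls
NAMING_CONVENTION = re.compile(r"^[A-Z]+_[A-Z]+_TC\d{3}_v\d+\.xls$")

candidates = [
    "BANK_LOGIN_TC001_v1.xls",  # follows the convention
    "login tests final2.xls",   # violates the convention
]

for name in candidates:
    status = "ok" if NAMING_CONVENTION.match(name) else "violates convention"
    print(name, "-", status)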
6.0 Test Case Authoring Tools
Improving productivity with test management software:
Software designed to support test authoring is the single greatest productivity booster for writing test cases. It has these advantages over word processing, database, or spreadsheet software:
 Makes writing and outlining easier
 Facilitates cloning of cases and steps
 Easy to add, move, delete cases and steps
 Automatically numbers and renumbers
 Prints tests in easy-to-follow templates
Test authoring is usually included in off-the-shelf test management software, or it can be custom written. Test management software usually contains more features than just test authoring. When these extra features are factored into the purchase, such tools offer a lot of power for the price. If you are shopping for test management software, it should have all the usability advantages listed just above, plus additional functions:
 Exports tests to common formats
 Multi-user
 Tracks test writing progress, testing progress
 Tracks test results, or ports to database or defect tracker
 Links to requirements and/or creates coverage matrixes
 Builds test sets from cases
 Allows flexible security
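To show what the coverage-matrix function in the list above amounts to, here is a minimal sketch with invented requirement and test case identifiers; its only purpose is to reveal requirements that no test case covers.

# Invented mapping of test cases to the requirements they verify.
covers = {
    "TC001": ["REQ-1", "REQ-2"],
    "TC002": ["REQ-2"],
    "TC003": ["REQ-4"],
}
requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

# Coverage matrix: requirement -> the test cases that cover it.
matrix = {req: [tc for tc, reqs in covers.items() if req in reqs] for req in requirements}

for req, cases in matrix.items():
    print(req, "->", ", ".join(cases) if cases else "NOT COVERED")
# REQ-3 is reported as NOT COVERED, flagging a gap.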
There are many test case authoring tools available in the market today. Here we will limit the discussion to Mercury's Test Director and Applabs' in-house tool TestLink.
6.1 Mercury Interactive’s Test Director
By far the most familiar tool for maintaining Test Cases is Mercury Interactive’s Test Director. Test Director helps organizations deploy high-quality applications more quickly and effectively. It has four modules—Requirements Manager, Test Plan, Test Lab and Defects Manager. These allow for a smooth information flow between various testing stages. The completely web-enabled Test Director supports high levels of communication and collaboration among distributed testing teams.
6.1.1 Features & benefits of Test Director
Supports the Entire Testing Process
Test Director incorporates all aspects of the testing process—requirements management, planning, scheduling, running tests, issue management and project status analysis—into a single browser-based application.
6.1.1.1 Provides Anytime, Anywhere Access to Testing Assets
Using Test Director’s Web interface, testers, developers and business analysts can participate in and contribute to the testing process by collaborating across geographic and organizational boundaries.
6.1.1.2 Provides Traceability Throughout the Testing Process
Test Director links requirements to test cases, and test cases to issues, to ensure traceability throughout the testing cycle. When a requirement changes or a defect is fixed, the tester is notified of the change.
6.1.1.3 Integrates with Third-Party Applications
Whether you're using an industry standard configuration management solution, Microsoft Office, or a homegrown defect management tool, any application can be integrated into Test Director. Through its open API, Test Director preserves your investment in existing solutions and enables you to create an end-to-end lifecycle-management solution.
6.1.1.4 Manages Manual and Automated Tests
Test Director stores and runs both manual and automated tests, and can help jumpstart your automation project by converting manual tests to automated test scripts.
6.1.1.5 Accelerates Testing Cycles
Test Director’s Test Lab Manager accelerates the test execution cycles by scheduling and running tests automatically—unattended, even overnight. The results are reported into Test Director’s central repository, creating an accurate audit trail for analysis.
6.1.1.6 Facilitates a Consistent and Repeatable Testing Process
By providing a central repository for all testing assets, Test Director facilitates the adoption of a more consistent testing process, which can be repeated throughout the application lifecycle or shared across multiple applications or lines of business (LOBs).
6.1.1.7 Provides Analysis and Decision Support Tools
Test Director’s integrated graphs and reports help analyze application readiness at any point in the testing process. Using information about requirements coverage, planning progress, run schedules or defect statistics, QA managers are able to make informed decisions on whether the application is ready to go live.
6.2 Applabs TestLink
Though it does not have as many features as Test Director, the TestLink tool developed by Applabs is a simple and effective tool for maintaining test plans and test cases. It is developed using PHP and MySQL.

The various options provided in this tool are:

Product Management – The user can create, edit, and delete products.

Test Case Management – The user can create, edit, and delete test cases. There is an option to search test cases, as well as a print option to produce a hard copy of the test cases developed.

Test Plan Management – The user plans testing with the following options:

Creating, editing, and deleting test plans
Linking test cases to a test plan (import (smartlink) into a test plan)
Defining user/project rights
Deleting test cases

Keyword Management – The user can create, edit, and delete keywords, and can assign a single keyword to multiple test cases.

Execution Status – The user can generate various reports based on test case status (passed, failed, or blocked) or on the basis of build. In this section the user can also create, edit, and delete milestones, and can set the risk, importance, and owner of each category.

Test Case Execution – This option allows the user to execute test cases by component or category level, and also to create a new build. A print option is provided to produce a hard copy of the test plan and test cases.

User Administration – The user can create new logins to the tool or modify existing user details.
7.0 Test Case Coverage
Test coverage is about ensuring that test plans and test cases include information vital for successful testing of the program in the areas of functionality, performance, and the overall quality of the software. In addition, test managers who prepare test plans that provide proper test coverage can avoid the wrath of a project manager whose implementation has just gone sour, or of an angry customer whose system has just crashed.

(Note that test coverage is not the same thing as code coverage. Code coverage measures how the tests have exercised the code, e.g., which lines of code have never been executed. But you can exercise every line of code and still miss something important—like figuring out that the program doesn’t work at all on Windows 2000.)
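To make the distinction concrete, here is a small invented Python example: the test below executes every line of the function, so code coverage is complete, yet test coverage is still weak because boundary and invalid inputs are never exercised.

def shipping_cost(weight_kg: float) -> float:
    # Flat rate up to 10 kg, then a per-kilogram surcharge (invented business rule).
    cost = 5.0
    if weight_kg > 10:
        cost += (weight_kg - 10) * 0.5
    return cost

def test_shipping_cost() -> None:
    # Both branches run, so every line of shipping_cost is covered...
    assert shipping_cost(5) == 5.0
    assert shipping_cost(20) == 10.0
    # ...but nothing checks the 10 kg boundary or a negative weight,
    # so defects in those areas would slip through despite full code coverage.

test_shipping_cost()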

Test coverage requires information—about how the program installs, how fast the program accesses and processes data, and how the program appears on the monitor. These are just a few examples of the kinds of things a tester needs to know about a program’s functionality and performance in order to provide appropriate test coverage. Here we talk about gathering and using that information.

However, gathering information about a program just for the sake of collecting information does not improve your test coverage. Adequate test coverage involves a systematic approach that includes analyzing the available documentation for use in test planning, execution, and review. In order to come up with a successful strategy to improve test coverage, you’ll need to do three things:

1. Create a plan of attack to provide strong test coverage
2. Determine the scenarios for the test plan
3. Manage the changes made to information used by testing

Implementing this strategy requires that you, the test manager, think about people as much as about documents—taking into account the interests of the rest of the project team, and using people skills to encourage cooperation between teams. To this end, part of the manager’s role is to serve as “knowledge manager”—demonstrating how to share and store the team’s knowledge, and then using that knowledge to improve the organization’s methods.

8.0 Unit Summary
In this module we have learnt:
1. The process of authoring test cases.
2. Authoring test cases using functional specifications and use cases.
3. Test case management and test case authoring tools.
4. Test case coverage.

8.1 Exercise

Answer the following in short.
1. What is “test case authoring” and what are the attributes of a good test case?
2. How do you derive test cases from use cases?
3. Write a test case to test the login screen of a mail system.
4. Write a test case for testing a “user details” interface that accepts the user's personal details (you may make some assumptions and state limitations).
