Monday, September 24, 2007

Bug Life cycle:

In entomology (the study of real, living bugs), the term life cycle refers to the various stages that an insect assumes over its life. If you think back to your high school biology class, you will remember that the life cycle stages for most insects are egg, larva, pupa, and adult. It seems appropriate, given that software problems are also called bugs, that a similar life cycle system is used to identify their stages of life. Figure 18.2 shows an example of the simplest, and most optimal, software bug life cycle.


This example shows that when a bug is found by a software tester, it is logged and assigned to a programmer to be fixed. This state is called the open state. Once the programmer fixes the code, he assigns it back to the tester and the bug enters the resolved state. The tester then performs a regression test to confirm that the bug is indeed fixed and, if it is, closes it out. The bug then enters its final state, the closed state.

In some situations though, the life cycle gets a bit more complicated.


In this case the life cycle starts out the same, with the tester opening the bug and assigning it to the programmer, but the programmer doesn't fix it. He doesn't think it's bad enough to fix and assigns it to the project manager to decide. The project manager agrees with the programmer and places the bug in the resolved state as a "won't-fix" bug. The tester disagrees, looks for and finds a more obvious and general case that demonstrates the bug, reopens it, and assigns it to the programmer to fix. The programmer fixes the bug, resolves it as fixed, and assigns it to the tester. The tester confirms the fix and closes the bug.

You can see that a bug might undergo numerous changes and iterations over its life, sometimes looping back and starting the life cycle all over again. The figure below takes the simple model above and adds to it the possible decisions, approvals, and looping that can occur in most projects. Of course, every software company and project will have its own system, but this figure is fairly generic and should cover almost any bug life cycle that you'll encounter.



The generic life cycle has two additional states and extra connecting lines. The review state is where the project manager or a committee, sometimes called a Change Control Board, decides whether the bug should be fixed. In some projects all bugs go through the review state before they're assigned to the programmer for fixing. In other projects, this may not occur until near the end of the project, or not at all. Notice that the review state can also go directly to the closed state. This happens if the review decides that the bug shouldn't be fixed: it could be too minor, is really not a problem, or is a testing error. The other additional state is deferred. The review may determine that the bug should be considered for fixing at some time in the future, but not for this release of the software.

The additional line from the resolved state back to the open state covers the situation where the tester finds that the bug hasn't been fixed. It gets reopened and the bug's life cycle repeats.

The two dotted lines that loop from the closed and the deferred states back to the open state rarely occur, but are important enough to mention. Since a tester never gives up, it's possible that a bug thought to be fixed, tested, and closed could reappear. Such bugs are often called regressions. It's also possible that a deferred bug could later be proven serious enough to fix immediately. If either of these occurs, the bug is reopened and started through the process again. Most project teams adopt rules for who can change the state of a bug or assign it to someone else. For example, maybe only the project manager can decide to defer a bug, or only a tester is permitted to close a bug. What's important is that once you log a bug, you follow it through its life cycle, don't lose track of it, and provide the necessary information to drive it to being fixed and closed.

Friday, September 21, 2007

Testing Techniques Introduction

Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.
Software Testing Fundamentals
Testing objectives include
1.Testing is a process of executing a program with the intent of finding an error.
2.A good test case is one that has a high probability of finding an as yet undiscovered error.
3.A successful test is one that uncovers an as yet undiscovered error.
Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects -- it can only show that software defects are present.
White Box Testing
White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1.guarantee that all independent paths within a module have been exercised at least once,
2.exercise all logical decisions on their true and false sides,
3.execute all loops at their boundaries and within their operational bounds, and
4.exercise internal data structures to ensure their validity.
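As a minimal illustration of point 2, here is a hedged Python/pytest sketch; the function and its values are invented for this example, and the two tests exercise the single decision on both its true and false sides.

# Illustrative only: a one-decision function and a test for each side of the decision.
def apply_discount(total, is_member):
    """Return the payable amount; members get a flat 10 off (made-up rule)."""
    if is_member:              # the logical decision under test
        return total - 10      # true side
    return total               # false side

def test_discount_true_side():
    assert apply_discount(100, True) == 90

def test_discount_false_side():
    assert apply_discount(100, False) == 100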
The Nature of Software Defects
Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. General processing tends to be well understood while special case processing tends to be prone to errors.
We often believe that a logical path is not likely to be executed when it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing.
Typographical errors are random.
Basis Path Testing
This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.
Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.
The Basis Set
An independent path is any path through a program that introduces at least one new set of processing statements (must move along at least one new edge in the path). The basis set is not unique. Any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to
1.The number of regions in the flow graph.
2.V(G) = E - N + 2 where E is the number of edges and N is the number of nodes.
3.V(G) = P + 1 where P is the number of predicate nodes.
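A small worked example may help here; the flow graph below is invented purely to show that the formulas agree. Nodes 1 and 3 are assumed to be the predicate nodes, each with two outgoing edges.

# Illustrative flow graph, written as an edge list (node labels are made up).
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 6), (5, 6)]
nodes = {n for edge in edges for n in edge}
predicate_nodes = {1, 3}                  # nodes with two outgoing edges

E, N, P = len(edges), len(nodes), len(predicate_nodes)
print("V(G) = E - N + 2 =", E - N + 2)    # 7 - 6 + 2 = 3
print("V(G) = P + 1     =", P + 1)        # 2 + 1 = 3, so the basis set has 3 paths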
Deriving Test Cases
1.From the design or source code, derive a flow graph.
2.Determine the cyclomatic complexity of this flow graph.
Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code and adding one.
3.Determine a basis set of linearly independent paths.
Predicate nodes are useful for determining the necessary paths.
4.Prepare test cases that will force execution of each path in the basis set. Each test case is executed and compared to the expected results.
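The following Python/pytest sketch walks these four steps for a deliberately tiny, invented function; the paths and input values are assumptions chosen only for illustration.

# Step 1 (in place of drawing the flow graph): a simple function with two predicates.
def classify(n):
    if n < 0:                 # predicate 1
        return "negative"
    if n == 0:                # predicate 2
        return "zero"
    return "positive"

# Step 2: two predicate nodes, so V(G) = P + 1 = 3.
# Step 3: a basis set of three linearly independent paths:
#   path 1: n < 0   -> "negative"
#   path 2: n == 0  -> "zero"
#   path 3: n > 0   -> "positive"
# Step 4: one test case forcing execution of each basis path.
def test_path_negative():
    assert classify(-5) == "negative"

def test_path_zero():
    assert classify(0) == "zero"

def test_path_positive():
    assert classify(7) == "positive"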
Automating Basis Set Derivation
The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph. Each row and column correspond to a particular node and the matrix corresponds to the connections (edges) between nodes. By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link weights can be represented:
•the probability that an edge will be executed,
•the processing time expended during link traversal,
•the memory required during link traversal, or
•the resources required during link traversal.
Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.
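As a rough sketch of the idea, the graph matrix below uses the simplest 0/1 link weights for an invented four-node flow graph; summing the entries gives the edge count needed for V(G).

# Graph matrix sketch: entry [i][j] is the link weight of the edge from node i to node j
# (1 = edge exists, 0 = no edge). Node numbering is made up for illustration.
graph_matrix = [
    # to:  1  2  3  4
    [0, 1, 1, 0],   # from node 1 (a predicate node: two outgoing edges)
    [0, 0, 0, 1],   # from node 2
    [0, 0, 0, 1],   # from node 3
    [0, 0, 0, 0],   # from node 4 (exit node)
]

E = sum(sum(row) for row in graph_matrix)   # number of edges
N = len(graph_matrix)                       # number of nodes
print("V(G) = E - N + 2 =", E - N + 2)      # 4 - 4 + 2 = 2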
Loop Testing
This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:
1.simple loops,
2.nested loops,
3.concatenated loops, and
4.unstructured loops.
Simple Loops
The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop:
1.skip the loop entirely,
2.only pass once through the loop,
3.m passes through the loop where m < n,
4.n - 1, n, n + 1 passes through the loop.
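Here is one hedged way to express those test values with pytest; the function and the assumed limit of n = 10 passes are made up for illustration.

# Simple-loop sketch: run the loop 0, 1, m (m < n), n-1, n and n+1 times.
import pytest

MAX_PASSES = 10   # n, the assumed maximum number of allowable passes through the loop

def sum_first(values):
    """Sum at most MAX_PASSES values; the loop body runs once per value."""
    total = 0
    for i, v in enumerate(values):
        if i >= MAX_PASSES:
            break
        total += v
    return total

@pytest.mark.parametrize("passes", [0, 1, 5, MAX_PASSES - 1, MAX_PASSES, MAX_PASSES + 1])
def test_simple_loop_boundaries(passes):
    values = [1] * passes
    # the loop never runs more than MAX_PASSES times, so the sum is capped at that limit
    assert sum_first(values) == min(passes, MAX_PASSES)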
Nested Loops
The testing of nested loops cannot simply extend the technique of simple loops since this would result in a geometrically increasing number of test cases. One approach for nested loops:
1.Start at the innermost loop. Set all other loops to minimum values.
2.Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3.Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops to typical values.
4.Continue until all loops have been tested.
Concatenated Loops
Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.
Unstructured Loops
This type of loop should be redesigned not tested!!!
Other White Box Techniques
Other white box testing techniques include:
1.Condition testing
oExercises the logical conditions in a program.
2.Data flow testing
oSelects test paths according to the locations of definitions and uses of variables in the program.
Black Box Testing
Introduction
Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:
1.incorrect or missing functions,
2.interface errors,
3.errors in data structures or external database access,
4.performance errors, and
5.initialization and termination errors.
Tests are designed to answer the following questions:
1.How is the function's validity tested?
2.What classes of input will make good test cases?
3.Is the system particularly sensitive to certain input values?
4.How are the boundaries of a data class isolated?
5.What data rates and data volume can the system tolerate?
6.What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which
1.reduce the number of additional test cases that must be designed to achieve reasonable testing, and
2.tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.
Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:
1.If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2.If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3.If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4.If an input condition is Boolean, then one valid and one invalid equivalence class are defined.
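A short sketch, assuming a hypothetical input condition that an age must lie in the range 18 to 65: one representative value is drawn from the valid class and one from each of the two invalid classes.

# Equivalence partitioning sketch for a made-up "age in 18..65" input condition.
import pytest

def accept_age(age):
    return 18 <= age <= 65

@pytest.mark.parametrize("age,expected", [
    (30, True),    # valid class: inside the range
    (10, False),   # invalid class: below the range
    (80, False),   # invalid class: above the range
])
def test_age_equivalence_classes(age, expected):
    assert accept_age(age) is expected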
Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:
1.For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
2.If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3.Apply guidelines 1 and 2 to the output.
4.If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
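Continuing the same hypothetical 18 to 65 range, a boundary value sketch adds test cases at each boundary and just above and just below it.

# Boundary value analysis sketch for the made-up "age in 18..65" range (a = 18, b = 65).
import pytest

def accept_age(age):
    return 18 <= age <= 65

@pytest.mark.parametrize("age,expected", [
    (17, False), (18, True), (19, True),   # just below, at, and just above a
    (64, True),  (65, True), (66, False),  # just below, at, and just above b
])
def test_age_boundaries(age, expected):
    assert accept_age(age) is expected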
Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
1.Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2.A cause-effect graph is developed.
3.The graph is converted to a decision table.
4.Decision table rules are converted to test cases.
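As a hedged sketch of step 4, the decision table for an invented two-cause login rule is converted directly into parameterized test cases.

# Cause-effect sketch: two causes (valid user, valid password), one effect (access granted).
import pytest

def can_login(valid_user, valid_password):
    return valid_user and valid_password   # effect occurs only when both causes are true

# Each decision-table rule (one combination of causes plus its effect) becomes a test case.
@pytest.mark.parametrize("valid_user,valid_password,granted", [
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
])
def test_login_decision_table(valid_user, valid_password, granted):
    assert can_login(valid_user, valid_password) is granted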

TestDirector_Introduction

Introduction

TestDirector, the industry's first global test management solution, helps organizations deploy high-quality applications more quickly and effectively. Its four modules (Requirements, Test Plan, Test Lab, and Defects) are seamlessly integrated, allowing for a smooth information flow between the various testing stages. The completely Web-enabled TestDirector supports high levels of communication and collaboration among distributed testing teams, driving a more effective, efficient global application-testing process.
Features in TestDirector 7.5

Web-based Site Administrator


The Site Administrator includes tabs for managing projects, adding users and defining user properties, monitoring connected users, monitoring licenses and monitoring TestDirector server information.

Domain Management


TestDirector projects are now grouped by domain. A domain contains a group of related TestDirector projects, and assists you in organizing and managing a large number of projects.

Enhanced Reports and Graphs


Additional standard report types and graphs have been added, and the user interface is richer in functionality. The new format enables you to customize more features.

Version Control


Version control enables you to keep track of the changes you make to the testing information in your TestDirector project. You can use your version control database for tracking manual, WinRunner and QuickTest Professional tests in the test plan tree and test grid.

Collaboration Module


The Collaboration module, available to existing customers as an optional upgrade, allows you to initiate an online chat session with another TestDirector user. While in a chat session, users can share applications and make changes.

Features in TestDirector 8.0

TestDirector Advanced Reports Add-in


With the new Advanced Reports Add-in, TestDirector users are able to maximize the value of their testing project information by generating customizable status and progress reports. The Advanced Reports Add-in offers the flexibility to create custom report configurations and layouts, unlimited ways to aggregate and compare data and ability to generate cross-project analysis reports.

Automatic Traceability Notification


The new traceability feature automatically traces changes to testing process entities such as requirements or tests, and notifies the user via a flag or e-mail. For example, when a requirement changes, the associated test is flagged and the tester is notified that the test may need to be reviewed to reflect the requirement changes.

Coverage Analysis View in Requirements Module


The graphical display enables you to analyze the requirements according to test coverage status and view associated tests - grouped according to test status.

Hierarchical Test Sets


Hierarchical test sets provide the ability to better organize your test run process by grouping test sets into folders.

Workflow for all TestDirector Modules


The addition of the script editor to all modules enables organizations to customize TestDirector to follow and enforce any methodology and best practices.

Improved Customization


With a greater number of available user fields, the ability to add memo fields and the ability to create input masks, users can customize their TestDirector projects to capture any data required by their testing process. A new rich edit option adds color and formatting options to all memo fields.

TestDirector Features & Benefits

Supports the entire testing process


TestDirector incorporates all aspects of the testing process (requirements management, planning, scheduling, running tests, issue management and project status analysis) into a single browser-based application.

Leverages innovative Web technology


Testers, developers and business analysts can participate in and contribute to the testing process by working seamlessly across geographic and organizational boundaries.

Uses industry-standard repositories


TestDirector integrates easily with industry-standard databases such as SQL Server, Oracle, Access and Sybase.

Links test plans to requirements


TestDirector connects requirements directly to test cases, ensuring that functional requirements have been covered by the test plan.

Integrates with Microsoft Office


TestDirector can import requirements and test plans from Microsoft Office, preserving your investment and accelerating your testing process.

Manages manual and automated tests


TestDirector stores and runs both manual and automated tests, and can help jumpstart a user’s automation project by converting manual tests to automated test scripts.

Accelerates testing cycles


TestDirector's TestLab manager accelerates the test execution cycles by scheduling and running tests automatically—unattended, even overnight. The results are reported into TestDirector’s central repository, creating an accurate audit trail for analysis.

Supports test runs across boundaries


TestDirector allows testers to run tests on their local machines and then report the results to the repository that resides on a remote server.

Integrates with internal and third-party tools


Documented COM API allows TestDirector to be integrated both with internal tools (e.g., WinRunner and LoadRunner) and external third-party lifecycle applications.

Enables structured information sharing


TestDirector controls the information flow in a structured and organized manner. It defines the role of each tester in the process and sets the appropriate permissions to ensure information integrity.

Provides Analysis and Decision Support Tools


TestDirector's integrated graphs and reports help analyze application readiness at any point in the testing process. Using information about requirements coverage, planning progress, run schedules or defect statistics, managers are able to make informed decisions on whether the application is ready to go live.

Provides easy defect reporting


TestDirector offers a defect tracking process that can identify similar defects in a database.

Generates customizable reports


TestDirector features a variety of customizable graphs and reports that provide a snapshot of the process at any time during testing. You can save your favorite views to have instant access to relevant project information.

Supports decision-making through analysis


TestDirector helps you make informed decisions about application readiness through dozens of reports and analysis features.

Provides Anytime, Anywhere Access to Testing Assets


Using TestDirector's Web interface, testers, developers and business analysts can participate in and contribute to the testing process by collaborating across geographic and organizational boundaries.

Provides Traceability Throughout the Testing Process


TestDirector links requirements to test cases, and test cases to issues, to ensure traceability throughout the testing cycle. When a requirement changes or a defect is fixed, the tester is notified of the change.

Integrates with Third-Party Applications


Whether an individual uses an industry standard configuration management solution, Microsoft Office or a homegrown defect management tool, any application can be integrated into TestDirector. Through the open API, TestDirector preserves the users’ investment in their existing solutions and enables them to create an end-to-end lifecycle-management solution.

Facilitates Consistent and Repetitive Testing Process


By providing a central repository for all testing assets, TestDirector facilitates the adoption of a more consistent testing process, which can be repeated throughout the application lifecycle or shared across multiple applications or lines of business (LOB).

Testing Process


Test management is a method for organizing application test assets—such as test requirements, test plans, test documentation, test scripts or test results—to enable easy accessibility and reusability. Its aim is to deliver quality applications in less time.

The test management process is the main principle behind Mercury Interactive's TestDirector. It is the first tool to capture the entire test management process—requirements management, test planning, test execution and defect management—in one powerful, scalable and flexible solution.

Managing Requirements


Requirements are what the users or the system needs. Requirements management, however, is a structured process for gathering, organizing, documenting and managing the requirements throughout the project lifecycle. Too often, requirements are neglected during the testing effort, leading to a chaotic process of fixing what you can and accepting that certain functionality will not be verified. In many organizations, requirements are maintained in Excel or Word documents, which makes it difficult for team members to share information and to make frequent revisions and changes.

TestDirector supports requirements-based testing and provides the testing team with a clear, concise and functional blueprint for developing test cases. Requirements are linked to tests—that is, when the test passes or fails, this information is reflected in the requirement records. You can also generate a test based on a functional requirement and instantly create a link between the requirement, the relevant test and any defects that are uncovered during the test run.

Test Planning


Based on the requirements, testers can start building the test plan and designing the actual tests. Today, organizations no longer wait to start testing at the end of the development stage, before implementation. Instead, testing and development begin simultaneously. This parallel approach to test planning and application design ensures that testers build a complete set of tests that cover every function the system is designed to perform.

TestDirector provides a centralized approach to test design, which is invaluable for gathering input from different members of the testing team and providing a central reference point for all of your future testing efforts. In the Test Plan module, you can design tests—manual and automated—document the testing procedures and create quick graphs and reports to help measure the progress of the test planning effort.

Running Tests


After you have addressed the test design and development issues and built the test plan, your testing team is ready to start running tests.

TestDirector can help configure the test environment and determine which tests will run on which machines. Most applications must be tested on different operating systems, different browser versions or other configurations. In TestDirector's Test Lab, testers can set up groups of machines to most efficiently use their lab resources.

TestDirector can also schedule automated tests, which saves testers time by running multiple tests simultaneously across multiple machines on the network. Tests with TestDirector can be scheduled to run unattended, overnight or when the system is in least demand for other tasks. For both manual and automated tests, TestDirector can keep a complete history of all test runs. By using this audit trail, testers can easily trace changes to tests and test runs.

Managing Defects


The keys to creating a good defect management process are setting up the defect workflow and assigning permission rules. With TestDirector, you can clearly define how the lifecycle of a defect should progress, who has the authority to open a new defect, who can change a defect's status to "fixed" and under which conditions the defect can be officially closed. TestDirector will also help you maintain a complete history and audit trail throughout the defect lifecycle.

Managers often decide whether the application is ready to go live based on defect analysis. By analyzing the defect statistics in TestDirector, you can take a snapshot of the application under test and see exactly how many defects you currently have, their status, severity, priority, age, etc. Because TestDirector is completely Web-based, different members of the team can have instant access to defect information, greatly improving communication in your organization and ensuring everyone is up to date on the status of the application.

Wednesday, September 19, 2007

How to do load testing with Visual Studio Team System?

Microsoft Visual Studio Team Edition for Testers provides a tool for creating and running load tests. The primary goal of a load test is to simulate many users accessing a server at the same time.

When you add Web tests to a load test, you simulate multiple users opening simultaneous connections to a server and making multiple HTTP requests. You can set properties on load tests that broadly apply to the individual Web tests.

When you add unit tests to a load test, you exercise the performance of non-Web based server components. An example application of a unit test under load is to test data access model components.

Load tests can be used with a set of computers known as a rig, which consists of agents and a controller.

Load tests are used in several different types of testing:

Smoke -- How your application performs under light loads for short durations.

Stress -- To determine if the application will run successfully for a sustained duration under heavy load.

Performance -- To determine how responsive your application is.

Capacity Planning -- How your application performs at various capacities.

About Load Tests
Load tests consist of a series of Web tests or unit tests that operate under multiple simulated users over a period of time. Load tests are created with the Load Test Wizard.

To change the load test properties, use the Load Test Editor. The properties allow you to run Web tests with different user profiles, browser targets, and load patterns. Test results are stored in a SQL-based Load Test Results Store.
View your load tests as they run in the Load Test Monitor. To view load test results for completed test runs, use the Load Test Analyzer.

Security
Load test files and load test results contain potentially sensitive information that could be used to build an attack against your computer or your network. Load tests and load test results contain computer names and connection strings. You should be aware of this when sharing tests or test results with others.

Defect Management Process

Defect Prevention -- Implementation of techniques, methodology and standard processes to reduce the risk of defects.

Deliverable Baseline -- Establishment of milestones where deliverables will be considered complete and ready for further development work. When a deliverable is baselined, any further changes are controlled. Errors in a deliverable are not considered defects until after the deliverable is baselined.

Defect Discovery -- Identification and reporting of defects for development team acknowledgment. A defect is only termed discovered when it has been documented and acknowledged as a valid defect by the development team member(s) responsible for the component(s) in error.

Defect Resolution -- Work by the development team to prioritize, schedule and fix a defect, and document the resolution. This also includes notification back to the tester to ensure that the resolution is verified.

Process Improvement -- Identification and analysis of the process in which a defect originated to identify ways to improve the process to prevent future occurrences of similar defects. Also the validation process that should have identified the defect earlier is analyzed to determine ways to strengthen that process.

Management Reporting -- Analysis and reporting of defect information to assist management with risk management, process improvement and project management.

Saturday, September 15, 2007

Types of Testing

What's Ad Hoc Testing ?
Testing in which the tester tries to break the software by randomly trying its functionality.

What's the Accessibility Testing ?
Testing that determines if software will be usable by people with disabilities.

What's the Alpha Testing ?
Alpha testing is conducted at the developer's site, in a controlled environment, by the end user of the software.

What's the Beta Testing ?
Testing the application after installation at the client's site.

What is Component Testing ?
Testing of individual software components (Unit Testing).

What's Compatibility Testing ?
Compatibility testing checks that the software is compatible with the other elements of the system (for example hardware, operating system and other software).

What is Concurrency Testing ?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
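A rough Python illustration (the shared counter, thread count and locking are assumptions, not a real multi-user harness): many threads update the same shared record and the test fails if any updates are lost.

# Concurrency-test sketch: concurrent updates to one shared value must not be lost.
import threading

def test_concurrent_updates_are_not_lost():
    counter = {"value": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(1000):
            with lock:                     # the code under test is expected to lock correctly
                counter["value"] += 1

    threads = [threading.Thread(target=worker) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    assert counter["value"] == 10 * 1000   # fails if any update was lost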

What is Conformance Testing ?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

What is Context Driven Testing ?
The context-driven school of software testing is a flavor of Agile testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

What is Data Driven Testing ?
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
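A minimal pytest sketch: the test logic stays fixed while the values are kept as data; here an in-memory CSV stands in for the external file or spreadsheet, and the usernames, passwords and expected results are invented.

# Data-driven sketch: each CSV row (username, password, expected result) becomes one test case.
import csv
import io

import pytest

DATA = io.StringIO("alice,secret,pass\nalice,wrong,fail\nbob,secret,fail\n")
CASES = [tuple(row) for row in csv.reader(DATA)]

@pytest.mark.parametrize("username,password,expected", CASES)
def test_login_data_driven(username, password, expected):
    # stand-in for the real action under test
    result = "pass" if (username == "alice" and password == "secret") else "fail"
    assert result == expected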

What is Conversion Testing ?
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

What is Dependency Testing ?
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

What is Depth Testing ?
A test that exercises a feature of a product in full detail.

What is Dynamic Testing ?
Testing software through executing it. See also Static Testing.

What is Endurance Testing ?
Checks for memory leaks or other problems that may occur with prolonged execution.

What is End-to-End testing ?
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

What is Exhaustive Testing ?
Testing which covers all combinations of input values and preconditions for an element of the software under test.

What is Gorilla Testing ?
Testing one particular module or piece of functionality heavily.

What is Installation Testing ?
Confirms that the application under test installs, upgrades and uninstalls correctly under both normal and abnormal conditions (for example, insufficient disk space or an interrupted installation).
What is Localization Testing ?
Testing that verifies the software has been correctly adapted for a specific locale (language, regional formats and so on).

What is Loop Testing?
A white box testing technique that exercises program loops.

What is Mutation Testing?
Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.

What is Monkey Testing ?
Testing a system or an application on the fly, i.e., just a few tests here and there to ensure the system or application does not crash.

What is Positive Testing ?
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
What is Negative Testing?
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

What is Path Testing ?
Testing in which all paths in the program source code are tested at least once.

What is Performance Testing ?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

What is Ramp Testing?
Continuously raising an input signal until the system breaks down.

What is Recovery Testing?
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

What is the Re-testing testing ?
Retesting: testing the functionality of the application again, typically to confirm that a reported defect has been fixed.

What is the Regression testing ?
Regression testing: checking that changes in the code have not affected the existing working functionality.

What is Sanity Testing ?
A brief test of the major functional elements of a piece of software to determine if it is basically operational.

What is Scalability Testing ?
Performance testing focused on ensuring the application under test gracefully handles increases in work load.

What is Security Testing ?
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

What is Stress Testing ?
Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.

What is Smoke Testing ?
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

What is Soak Testing ?
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

What's the Usability testing ?
Usability testing checks the application for user-friendliness.

What's the User acceptance testing ?
User acceptance testing is determining if software is satisfactory to an end-user or customer.

What's the Volume Testing ?
In volume testing, the system is subjected to a large volume of data.

1 : With thorough testing it is possible to remove all defects from a program prior to delivery to the customer.
a. True
b. False

2 : Which of the following are characteristics of testable software ?
a. observability
b. simplicity
c. stability
d. all of the above

3 : The testing technique that requires devising test cases to demonstrate that each program function is operational is called
a. black-box testing
b. glass-box testing
c. grey-box testing
d. white-box testing

4 : The testing technique that requires devising test cases to exercise the internal logic of a software module is called
a. behavioral testing
b. black-box testing
c. grey-box testing
d. white-box testing

5 : What types of errors are missed by black-box testing and can be uncovered by white-box testing ?
a. behavioral errors
b. logic errors
c. performance errors
d. typographical errors
e. both b and d

6 : Program flow graphs are identical to program flowcharts.
a. True
b. False

7 : The cyclomatic complexity metric provides the designer with information regarding the number of
a. cycles in the program
b. errors in the program
c. independent logic paths in the program
d. statements in the program

8 : The cyclomatic complexity of a program can be computed directly from a PDL representation of an algorithm without drawing a program flow graph.
a. True
b. False

9 : Condition testing is a control structure testing technique where the criteria used to design test cases is that they
a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs

10 : Data flow testing is a control structure testing technique where the criteria used to design test cases is that they
a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs

11 : Loop testing is a control structure testing technique where the criteria used to design test cases is that they
a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs

12 : Black-box testing attempts to find errors in which of the following categories
a. incorrect or missing functions
b. interface errors
c. performance errors
d. all of the above
e. none of the above

13 : Graph-based testing methods can only be used for object-oriented systems
a. True
b. False

14 : Equivalence testing divides the input domain into classes of data from which test cases can be derived to reduce the total number of test cases that must be developed.
a. True
b. False

15 : Boundary value analysis can only be used to do white-box testing.
a. True
b. False

16 : Comparison testing is typically done to test two competing products as part of customer market analysis prior to product release.
a. True
b. False

17 : Orthogonal array testing enables the test designer to maximize the coverage of the test cases devised for relatively small input domains.
a. True
b. False

18 : Test case design "in the small" for OO software is driven by the algorithmic detail of
the individual operations.
a. True
b. False

19 : Encapsulation of attributes and operations inside objects makes it easy to obtain object state information during testing.
a. True
b. False

20 : Use-cases can provide useful input into the design of black-box and state-based tests of OO software.
a. True
b. False

21 : Fault-based testing is best reserved for
a. conventional software testing
b. operations and classes that are critical or suspect
c. use-case validation
d. white-box testing of operator algorithms

22 : Testing OO class operations is made more difficult by
a. encapsulation
b. inheritance
c. polymorphism
d. both b and c

23 : Scenario-based testing
a. concentrates on actor and software interaction
b. misses errors in specifications
c. misses errors in subsystem interactions
d. both a and b

24 : Deep structure testing is not designed to
a. examine object behaviors
b. exercise communication mechanisms
c. exercise object dependencies
d. exercise structure observable by the user

25 : Random order tests are conducted to exercise different class instance life histories.
a. True
b. False

26 : Which of these techniques is not useful for partition testing at the class level
a. attribute-based partitioning
b. category-based partitioning
c. equivalence class partitioning
d. state-based partitioning

27 : Multiple class testing is too complex to be tested using random test cases.
a. True
b. False

28 : Tests derived from behavioral class models should be based on the
a. data flow diagram
b. object-relation diagram
c. state diagram
d. use-case diagram

29 : Client/server architectures cannot be properly tested because network load is highly variable.
a. True
b. False

30 : Real-time applications add a new and potentially difficult element to the testing mix
a. performance
b. reliability
c. security
d. time

1. What is the meaning of COSO ?
a. Common Sponsoring Organizations
b. Committee Of Sponsoring Organizations
c. Committee Of Standard Organizations
d. Common Standard Organization
e. None of the above
2. Which one is not key term used in internal control and security
a. Threat
b. Risk Control
c. Vulnerability
d. Exposure
e. None
3. Management is not responsible for an organization's internal control system
a. True
b. False

4. Who is ultimately responsible for the internal control system
a. CEO
b. Project Manager
c. Technical Manager
d. Developer
e. Tester
5. Who will provide important oversight to the internal control system
a. Board of Directors
b. Audit Committee
c. Accounting Officers
d. Financial Officers
e. both a & b
f. both c & d
6. The sole purpose of the Risk Control is to avoid risk
a. True
b. False
7. Management control involves limiting access to computer resources
a. True
b. False
8. Software developed by contractors who are not part of the organization is referred to as in sourcing organizations
a. True
b. False
9. Which one is not a tester's responsibility?
a. Assure the process for contracting software is adequate
b. Review the adequacy of the contractors test plan
c. Perform acceptance testing on the software
d. Assure the ongoing operation and maintenance of the contracted software
e. None of the above
10. The software tester may or may not be involved in the actual acceptance testing
a. True
b. False
11. In the client systems, testing should focus on performance and compatibility
a. True
b False
12. A database access application typically consists of the following elements, except
a. User Interface code
b. Business logic code
c. Data-access service code
d. Data Driven code
13. Wireless technologies represent a rapidly emerging area of growth and importance for providing ever-present access to the internet and email.
a. True
b. False

14. Acceptance testing involves procedures for identifying acceptance criteria for interim life cycle products and for accepting them.
a. True
b. False
15. Acceptance testing is designed to determine whether or not the software is "fit" for the user to use. The concept of "fit" is important in both design and testing. There are four components of "fit".
a. True
b. False
16. Acceptance testing occurs only at the end point of the development process; it should be an ongoing activity that tests both interim and final products.
a. True
b. False
17. Acceptance requirements that a system must meet can be divided into ________ categories.
a. Two
b. Three
c. Four
d. Five
18. _______ Categories of testing techniques can be used in acceptance testing.
a. Two
b. Three
c. Four
d. Five
19. _____________ define the objectives of the acceptance activities and a plan for meeting them.
a. Project Manager
b. IT Manager
c. Acceptance Manager
d. ICO
20. Software Acceptance testing is the last opportunity for the user to examine the software for functional, interface, performance, and quality features prior to the final acceptance review.
a. True
b. False

Try to answer these questions, friends. All of these have been asked in various interviews.
What is Software Testing ?
What is the Purpose of Testing?
What types of testing do testers perform?
What is the Outcome of Testing ?
What kind of testing have you done ?
What is the need for testing ?
How do you determine what is to be tested ?
How do you go about testing a project ?

What is the Initial Stage of testing ?
What are the various levels of testing ?
What are the Minimum requirements to start testing ?
What is test metrics ?
Why do you go for White box testing, when Black box testing is available ?
What are the entry criteria for Automation testing ?
When to start and Stop Testing ?
What is Quality ?
What is quality assurance ?
What is quality control ?
What is verification ?
What is validation ?
What is SDLC and TDLC ?
What are the Qualities of a Tester ?
What is the relationship between Quality & Testing ?
What are the types of testing you know and you experienced ?
After completing testing, what would you deliver to the client ?
What is a Test Bed ?
Why do you go for Test Bed ?
What are Data Guidelines ?
What is Severity and Priority and who will decide what ?
Can Automation testing replace manual testing ? If it so, how ?
What is a test case?
What is a test condition ?
What is the test script ?
What is the test data ?
What is an Inconsistent bug ?
What is the difference between Re-testing and Regression testing ?
What are the different types of testing techniques ?
What are the different types of test case techniques ?
What are the risks involved in testing ?
Differentiate Test bed and Test Environment ?
What is the difference between a defect, error, bug, failure and fault ?
What is the difference between quality and testing ?
What is the difference between White & Black Box Testing ?
What is the difference between Quality Assurance and Quality Control ?
What is the difference between Testing and debugging ?
What is the difference between bug and defect ?
What is the difference between verification and validation ?
What is the difference between functional spec. and Business requirement specification ?
What is the difference between unit testing and integration testing ?
What is the diff between Volume & Load ?


30 WinRunner Interview Questions

Which scripting language is used by WinRunner?
WinRunner uses TSL (Test Script Language), which is similar to C.
What's the WinRunner ?
WinRunner is Mercury Interactive Functional Testing Tool.
How many types of Run Modes are available in WinRunner?
WinRunner provides three types of Run Modes.
Verify Mode
Debug Mode
Update Mode
What's the Verify Mode?
In Verify Mode, WinRunner compares the current results of the application to its expected results.
What's the Debug Mode?
In Debug Mode, WinRunner helps you track down defects in a test script.
What's the Update Mode?
In Update Mode, WinRunner updates the expected results of a test script.
How many types of recording modes available in WinRunner?
WinRunner provides two types of Recording Mode:
Context Sensitive
Analog
What's the Context Sensitive recording ?
WinRunner captures and records the GUI objects, windows, keyboard inputs, and mouse click activities through Context Sensitive Recording.
When Context Sensitive mode is to be chosen?
a. The application contains GUI objects
b. Does not require exact mouse movements.
What's the Analog recording?
It captures and records keyboard inputs, mouse clicks and mouse movements. It does not capture GUI objects and windows.
When Analog mode is to be chosen?
a. The application contains bitmap areas.
b. Does require exact mouse movements.
What are the components of WinRunner ?
a. Test Window: This is a window where the TSL script is generated/programmed.
b. GUI Spy tool : WinRunner lets you spy on the GUI objects by recording the Properties.
Where are Debug Results stored?
Debug results are always saved in the debug folder.

What's the WinRunner testing process ?
The WinRunner testing process involves six main steps.
Create GUI map
Create Test
Debug Test
Run Test
View Results
Report Defects
What's the GUI SPY?
You can view the physical properties of objects and windows through GUI SPY.
How many types of modes for organizing GUI map files?
WinRunner provides two types of modes-
Global GUI map files
Per Test GUI map files
What's contained in GUI map files?
GUI map files store the information that WinRunner learns about GUI objects and windows.
How does WinRunner recognize objects on the application ?
WinRunner recognizes objects on the application through GUI map files.
What's the difference between GUI map and GUI map files ?
The GUI map is actually the sum of one or more GUI map files.
How do you view the GUI map content ?
We can view the GUI map content through GUI map editor.
What's the checkpoint?
A checkpoint enables you to check your application by comparing its expected results to the actual results.
What's the Execution Arrow?
Execution Arrow indicates the line of script being executed.
What's the Insertion Point ?
Insertion point indicates the line of script where you can edit and insert the text.
What's the Synchronization?
Synchronization enables you to solve anticipated timing problems between the test and the application.
What's the Function Generator?
The Function Generator provides a quick and error-free way to add TSL functions to the test script.
How many types of checkpoints are available in WinRunner ?
WinRunner provides four types of checkpoints-
GUI Checkpoint
Bitmap Checkpoint
Database Checkpoint
Text Checkpoint
What's contained in the test script?
The test script contains statements written in Test Script Language (TSL).
How do you modify the logical name or the physical description of the objects in GUI map?
We can modify the logical name or the physical description of the objects through GUI map editor.

What is a Data Driven Test ?
When you want to test your application, you may want to check how it performs the same operations with multiple sets of data.
How do you record a Data Driven Test ?
We can create a Data Driven Test using flat files, data tables, or a database.
How do you clear a GUI map files ?
We can clear the GUI map files through "CLEAR ALL" option.
What are the steps of creating a Data Driven Test ?
Data Driven Testing has four steps-
Creating test
Converting into Data Driven Test
Run Test
Analyze test
What is Rapid Test Script Wizard ?
It performs two tasks.
a. It systematically opens the windows in your application and learns a description of every GUI object. The wizard stores this information in a GUI map file.
b. It automatically generates tests based on the information it learned as it navigated through the application.
What are the different modes for learning an application in the Rapid Test Script Wizard ?
a. Express
b. Comprehensive.
What's the extension of GUI map files ?
GUI map files extension is ".gui".
What statement is generated by WinRunner when you check any object ?
An obj_check_gui statement.
What statement is generated by WinRunner when you check any window ?
A win_check_gui statement.
What statement is generated by WinRunner when you check a bitmap image over an object ?
An obj_check_bitmap statement.
What statement is generated by WinRunner when you check a bitmap image over a window ?
A win_check_bitmap statement.
What statement is used by WinRunner in batch testing ?
The call statement.
Which short key is used to freeze the GUI Spy ?
"Ctrl+F3"
How many types of parameter used by WinRunner ?
WinRunner provides three types of Parameter-
Test
Data Driven
Dynamic

How many types of Merging used by WinRunner?
WinRunner used two types of Merging-
Auto
Manual
What's the Virtual Object Wizard?
Whenever WinRunner cannot recognize an object as a standard object, the Virtual Object Wizard can be used to teach WinRunner to recognize it.
How do you handle unexpected events and errors ?
WinRunner uses the Exception Handling function to handle unexpected events and errors.
How do you comment your script ?
We comment a script or a line of the script by inserting "#" at the beginning of the line.
What's the purpose of the set_window command?
The set_window command sets the focus to the specified window.
How you created your test script ?
Programming.
What's the command to invoke an application?
invoke_application
What do you mean by the logical name of objects ?
The logical name of an object is determined by its class, but in most cases the logical name is the label that appears on the object.
How many types of GUI checkpoints are there?
In WinRunner, there are three types of GUI checkpoints-
For a Single Property
For Object/Window
For Multiple Objects
How many types of Bitmap checkpoints are there?
In WinRunner, there are two types of Bitmap checkpoints-
For Object/Window
For Screen Area
How many types of Database checkpoints are there?
In WinRunner, there are three types of Database checkpoints-
Default Check
Custom Check
Runtime Record Check
How many types of Text checkpoints are there?
In WinRunner, there are four types of Text checkpoints-
From Object/Window
From Screen Area
From Selection (Web only)
Web Text Checkpoint
What add-ins are available for WinRunner ?
Add-ins are available for Java, ActiveX, WebTest, Siebel, Baan, Stingray, Delphi, Terminal Emulator, Forte, NSDK/Natstar, Oracle and PowerBuilder.

Notes:
* WinRunner generates a menu_select_item statement whenever you select a menu item.
* WinRunner generates a set_window statement whenever you begin working in a new window.
* WinRunner generates an edit_set statement whenever you enter keyboard input.
* WinRunner generates an obj_mouse_click statement whenever you click an object with the mouse pointer.
* WinRunner generates obj_wait_bitmap or win_wait_bitmap statements whenever you synchronize the script on an object or a window.
* The ddt_open statement opens the data table.
* The ddt_close statement closes the data table.
* WinRunner inserts a win_get_text or obj_get_text statement in the script for checking text.
* The button_press statement presses a button.
* WinRunner generates a list_select_item statement whenever you select a value in a drop-down list.
* We can compare two files in WinRunner using the file_compare function.
* The tl_step statement is used to report whether a section of a test passes or fails.
* The call_close statement closes the called test when its run is completed.
A short sketch that puts several of these statements together follows.
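As a rough sketch only, a fragment combining several of the statements above might read as follows (window, field, and step names are invented for the example):

set_window("Flight Reservation", 10);         # begin working in a window
list_select_item("Fly From:", "Denver");      # pick a value from a drop-down list
edit_set("Name:", "John Smith");              # keyboard input into an edit field
button_press("Insert Order");                 # press a button
obj_get_text("Order Status", status_text);    # read text from an object into a variable
if (status_text == "Insert Done")
    tl_step("check order", 0, "Order was inserted.");      # 0 reports the step as passed
else
    tl_step("check order", 1, "Order was not inserted.");  # non-zero reports it as failed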

32 QTP Interview Questions

Full form of QTP ?
Quick Test Professional
What's QTP?
QTP is Mercury Interactive's functional testing tool.
Which scripting language is used by QTP?
QTP uses VBScript.
What's the basic concept of QTP ?
QTP is based on two concepts-
* Recording
* Playback
How many types of recording facilities are available in QTP ?
QTP provides three types of recording methods-
* Context Recording (Normal)
* Analog Recording
* Low Level Recording
How many types of Parameters are available in QTP ?
QTP provides three types of Parameters-
* Method Argument
* Data Driven
* Dynamic

What's the QTP testing process?
The QTP testing process consists of seven steps-
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Run
* Analyze
* Report Defects

What's the Active Screen ?
It provides a snapshot of your application as it appeared when you performed a certain step during the recording session.
What's the Test Pane ?
Test Pane contains Tree View and Expert View tabs.
What's Data Table ?
It assists you in parameterizing the test.
What's the Test Tree ?
It provides a graphical representation of the operations you have performed on your application.
Which environments does QTP support ?
ERP/ CRM
Java/ J2EE
VB, .NET
Multimedia, XML
Web Objects, ActiveX controls
SAP, Oracle, Siebel, PeopleSoft
Web Services, Terminal Emulator
IE, NN, AOL
How can you view the Test Tree ?
The Test Tree is displayed in the Tree View tab.
What's the Expert View ?
The Expert View displays the test script.
Which shortcut key is used for normal recording ?
F3
Which shortcut key is used to run the test script ?
F5
Which shortcut key is used to stop the recording ?
F4
Which shortcut key is used for Analog Recording ?
Ctrl+Shift+F4
Which shortcut key is used for Low Level Recording ?
Ctrl+Shift+F3

Which shortcut key is used to switch between Tree View and Expert View ?
Ctrl+Tab
What's the Transaction ?
You can measure how long it takes to run a section of your test by defining transactions.
Where you can view the results of the checkpoint?
You can view the results of the checkpoints in the Test Result Window.
What's the Standard Checkpoint?
A standard checkpoint checks the property values of an object in your application or web page.
Which environments are supported by the Standard Checkpoint ?
Standard checkpoints are supported in all add-in environments.
What's the Image Checkpoint ?
An image checkpoint checks the value of an image in your application or web page.
Which environments are supported by the Image Checkpoint ?
Image checkpoints are supported only in the Web environment.
What's the Bitmap Checkpoint ?
A bitmap checkpoint checks bitmap images in your web page or application.
Which environments are supported by the Bitmap Checkpoint ?
Bitmap checkpoints are supported in all add-in environments.
What's the Table Checkpoint ?
A table checkpoint checks the information within a table.
Which environments are supported by the Table Checkpoint ?
Table checkpoints are supported only in the ActiveX environment.
What's the Text Checkpoint ?
A text checkpoint checks that a text string is displayed in the appropriate place in your application or on a web page.
Which environments are supported by the Text Checkpoint ?
Text checkpoints are supported in all add-in environments.
Note:
* QTP records each step you perform and generates a test tree and a test script.
* By default, QTP records in normal (context sensitive) recording mode.
* If you are creating a test on a web object, you can record your test on one browser and run it on another browser.
* Analog Recording and Low Level Recording require more disk space than normal recording mode.

Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
1. What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times). What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
2. Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
3. What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
4. Will downtime for server and content maintenance/upgrades be allowed? How much?
5. What kinds of security (firewalls, encryption, passwords, etc.) will be required and what is it expected to do? How can it be tested?
6. How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
7. What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
8. Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
9. Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
10. How will internal and external links be validated and updated? How often?
11. Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
12. How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
13. How are cgi programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.
Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.
All pages should have links external to the page; there should be no dead-end pages.
The page owner, revision date, and a link to a contact person or organization should be included on each page.

Defect Severity determines the defect's effect on the application, whereas Defect Priority determines the urgency of repairing the defect.

Severity is given by Testers and Priority by Developers.

1. High Severity & Low Priority: For example, consider an application which generates banking-related reports weekly, monthly, quarterly and yearly by doing some calculations. If there is a fault while calculating the yearly report, it is a high severity fault but low priority, because it can be fixed in the next release as a change request.

2. High Severity & High Priority: In the above example, if there is a fault while calculating the weekly report, it is a high severity and high priority fault, because it blocks functionality of the application that is needed within the week. It should be fixed urgently.

3. Low Severity & High Priority: Suppose there is a spelling mistake or content issue on the homepage of a website which gets hundreds of thousands of hits daily. Although the fault does not affect the website's functionality, considering the status and popularity of the website in the competitive market it is a high priority fault.

4. Low Severity & Low Priority: If there is a spelling mistake on a page of the website which gets very few hits throughout the month, the fault can be considered low severity and low priority.


Testing types
* Functional testing: here we are checking the behavior of the software.
* Non-functional testing: here we are checking the performance, usability, volume, and security.

Testing methodologies
* Static testing: in static testing we are not executing the code.
ex: Walkthroughs, Inspections, Reviews

* Dynamic testing: in dynamic testing we are executing the code.
ex: Black box, White box

Testing techniques
* White box
* Black box

Testing levels
* Unit testing
* Integration testing
* System testing
* Acceptance testing


Types of Black Box Testing

Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)

System testing - testing is based on overall requirements specifications; covers all combined parts of a system.

Integration testing - testing combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes

Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing - testing how well the system protects against unauthorized internal or external access, wilful damage, etc; may require sophisticated testing techniques.

Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Acceptance testing - determining if software is satisfactory to a customer.

Comparison testing - comparing software weaknesses and strengths to competing products

Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
1. What is the name of the testing activity that is done on a product before implementing that product at the client side?
ANS : User Interface Testing

2. What is the difference between QA and QC activities?
ANS : QA measures the quality of the process, whereas QC measures the quality of the product.

3. What is path coverage?
ANS : Path coverage testing is the most comprehensive type of testing that a test suite can provide. It can find more bugs in a program, especially those that are caused by data coupling. However, path coverage is a testing level that is very hard to achieve, and usually only small and/or critical sections of code are checked in this way.

4. How many paths are possible for coverage?
ANS :

5. Is performance testing done in unit and system testing?
ANS: YES

6. UAT is done in__________ ______
A. Client's place by the client.
B. Client's place by the tester.
C. In the company by the client.

7. Critical Defect can also be termed as --------------
ANS : Show Stopper

8. What is static & dynamic testing?
ANS : Static testing is testing without executing the code (e.g., code reviews and walkthroughs), whereas dynamic testing is testing with the code being executed.
9. What are the types of integration testing?
Bottom Up, Top Down, Big Bang and Hybrid.
10. Software has more bugs; what is it due to?
Many things; a few are unclear requirements, poor design, coding and testing, poor quality, and miscommunication.

11. Starting a Hero Honda Bike

(1) Requirements: a bike can be started in two ways, so you need to have a kick rod or a start button; before that come other requirements such as petrol, engine, accelerator, ignition, etc.


(2) Usability: the bike should have a flexible kick rod, a button system, and a speedometer for speed reading.
Try to read the user manual.


(3) Functional: kick the kick rod or push the button and you should be able to hear the engine start, accelerate the bike, and see the speed on the speedometer.


(4) Non-Functional: check for performance - whether you are able to start the bike more than once immediately, after a certain time, and so on.
Measure how long it takes to start, and what the deviation is between a kick-rod start and a button start. This is the process.


From this you are in a good position to write test cases.

Follow

(1) Requirements
(2) Usability (GUI + User Manual).
(3) Functionality.
(4) Non-Functionality.
These are the four aspects you need to concentrate on. Practice writing test cases for anything (stapler, glass, bucket, bike, mouse, Notepad, Paint, calculator, ATM, keyboard - anything).

1. What is the name of the testing activity that is
done on a product before implementing that product
into the Client side?
Answer: Beta Testing.
3. A document that properly specifies all
requirements of the customer is_________.
Answer: SRS/BRS
4.The ____________ is used to simulate the
“Lower Interfacing Modules” in the
Top-Down Approach.
Answer: Interim Stubs.
6. What is path coverage?
Answer.
coverage: The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test suite.

path coverage: The percentage of paths that have been exercised by a test suite. 100% path
coverage implies 100% LCSAJ coverage.

path testing: A white box test design technique in which test cases are designed to execute paths.

7. How many paths are possible for coverage?
Answer: The number of paths depends on the code; there is no fixed limit.
8. Is performance testing done in unit and system testing?
Answer: False; performance testing is done only in system testing.
9. UAT is done in
Answer: There are two types of UAT: alpha and beta.
Alpha testing: done by a real customer at the developer's site.
Beta testing: done by end users at the customer's site.
10. Critical Defect can also be termed as
Ans: Show Stopper
11. What is the combination in Grey Box Testing?
Ans: Grey box testing is generally described as a combination of black box and white box testing; I am not aware of any predefined proportion for this.
12. What is the cyclomatic number and where do we use it?
Ans:
cyclomatic complexity: The number of independent paths through a program. Cyclomatic
complexity is defined as: L – N + 2P, where
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine)
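As a worked example: for a control-flow graph with L = 9 edges, N = 8 nodes and P = 1 connected part, the cyclomatic complexity is 9 - 8 + 2(1) = 3, which means there are 3 independent paths and at least 3 test cases are needed to cover them.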

13. How can it be known when to stop testing?
Ans: When the major bugs are fixed in all of the modules and we have reached a confidence level high enough to release the product. Sometimes there is no time to cover all the modules; in that case we go for ad-hoc testing.
14. What are the types of integration testing?
Ans: Top-down approach, bottom-up approach, big bang approach.
15. Software has more bugs; what is it due to?
Ans: Poor testing.
19. What is static & dynamic testing?
Ans: Static testing does not execute the code but finds faults through reviews and walkthroughs.
Dynamic testing: black box testing and white box testing come under dynamic testing.

Friday, September 14, 2007

The People Capability Maturity Model

The People Capability Maturity Model

The People Capability Maturity Model (P-CMM) adapts the maturity framework of the Capability Maturity Model for Software (CMM) [Paulk 95] to managing and developing an organization's work force. The motivation for the P-CMM is to radically improve the ability of software organizations to attract, develop, motivate, organize, and retain the talent needed to continuously improve software development capability. The P-CMM is designed to allow software organizations to integrate work-force improvement with software process improvement programs guided by the CMM. The P-CMM can also be used by any kind of organization as a guide for improving their people-related and work-force practices.

Based on the best current practices in the fields such as human resources and organizational development, the P-CMM provides organizations with guidance on how to gain control of their processes for managing and developing their work force. The P-CMM helps organizations to characterize the maturity of their work-force practices, guide a program of continuous work-force development, set priorities for immediate actions, integrate work-force development with process improvement, and establish a culture of software engineering excellence. It describes an evolutionary improvement path from ad hoc, inconsistently performed practices, to a mature, disciplined development of the knowledge, skills, and motivation of the work force, just as the CMM describes an evolutionary improvement path for the software processes within an organization.

The P-CMM consists of five maturity levels that lay successive foundations for continuously improving talent, developing effective teams, and successfully managing the people assets of the organization. Each maturity level is a well-defined evolutionary plateau that institutionalizes a level of capability for developing the talent within the organization.

The key process areas at Level 2 focus on instilling basic discipline into workforce activities. They are work environment, communication, staffing, performance management, training, and compensation.

The key process areas at Level 3 address issues surrounding the identification of the organization's primary competencies and aligning its people management activities with them. They are knowledge and skills analysis, workforce planning, competency development, career development, competency-based practices, and participatory culture.

The key process areas at Level 4 focus on quantitatively managing organizational growth in people management capabilities and in establishing competency-based teams. They are mentoring, team building, team-based practices, organizational competency management, and organizational performance alignment.

The key process areas at Level 5 cover the issues that address continuous improvement of methods for developing competency, at both the organizational and the individual level. They are personal competency development, coaching, and continuous workforce innovation.

Standards and Guidelines

Standards and Guidelines


On this page I provide access to a number of standards and guidelines that pertain either specifically to the HCI community or, more broadly, to HCI-related topics. Additional information on HCI standards can be found from many of the key standards organizations such as ANSI, IEC, ISO and the IEEE. For those interested in obtaining hard copies of specific government and industry standards, the Document Centre is a good place to visit.

There are five major sections within the Standards and Guidelines page. Section 1, General, details links that contain fairly generic content about HCI and Usability Engineering. The next section, Web Based, contains items of interest to those developing applications and web pages for the world wide web. The third section, Operating Systems, contains HCI guidelines specific to individual operating systems and environments. Section 4, Gov't and International, provides links to standards organizations within the US government and the international community. Finally, Section 5, NASA, contains a few links to standards developed by NASA which have application to more "earthly based" development projects.

Usability

Usability is the combination of fitness for purpose, ease of use, and ease of learning that makes a product effective. Usability testing focuses on determining whether the product is easy to learn, satisfying to use, and contains the functionality that users desire. The movement towards usability testing stimulated the evolution of usability labs. Many forms of usability testing have been tried, from discount usability engineering and field tests to competitive usability testing. Apart from research and development of various testing methods, there have been developments in the field of automated tools for evaluating interface designs against usability guidelines. Some of these tools are DRUM, WebCAT, WebSAT, etc.

This page provides informative links to other sources on World Wide Web, focusing on issues of Usability Testing Methods and Tools. The Page is organized in four subsections. The first section includes links to Comprehensive sites that cover various aspects of usability including Testing Methods and Tools. The second section covers sites on Testing Methods e.g. Heuristic, surveys etc. The third section contains links to Testing Tools including automated tools and testing labs. The fourth section presents links to some case studies. Lastly, we credit the people who maintain this page.

Testing Methodology

Testing Methodology


We begin the testing process by developing a comprehensive plan to test the general functionality and special features on a variety of platform combinations. Strict quality control procedures are used. The process verifies that the application meets the requirements specified in the system requirements document and is bug free. At the end of each testing day, the team prepares a summary of completed and failed tests. Our programmers address any identified issues, and the application is resubmitted to the testing team until every item is resolved. All changes and retesting are tracked through spreadsheets available to both the testing and programming teams. Applications are not allowed to launch until all identified problems are fixed. A report is prepared at the end of testing to show exactly what was tested and to list the final outcomes.

Our software testing methodology is applied in three distinct phases: unit testing, system testing, and acceptance testing.

* Unit Testing—The programmers conduct unit testing during the development phase. Programmers can test their specific functionality individually or with other units. However, unit testing is designed to test small pieces of functionality rather than the system as a whole. This allows the programmers to conduct the first round of testing to eliminate bugs before they reach the testing staff.
* System Testing—The system is tested as a complete, integrated system. System testing first occurs in the development environment but eventually is conducted in the production environment. Dedicated testers, project managers, or other key project staff perform system testing. Functionality and performance testing are designed to catch bugs in the system, unexpected results, or other ways in which the system does not meet the stated requirements. The testers create detailed scenarios to test the strength and limits of the system, trying to break it if possible. Editorial reviews not only correct typographical and grammatical errors, but also improve the system’s overall usability by ensuring that on-screen language is clear and helpful to users. Accessibility reviews ensure that the system is accessible to users with disabilities.
* Acceptance Testing—The software is assessed against the requirements defined in the system requirements document. The user or client conducts the testing in the production environment. Successful acceptance testing is required before client approval can be received.

Thursday, September 13, 2007

History -The spiral model

History -The spiral model
The spiral model was defined by Barry Boehm in his 1988 article "A Spiral Model of Software Development and Enhancement". This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters. As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.
The Spiral Model
DEFINITION - The spiral model, also known as the spiral lifecycle model, is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive, and complicated projects.
The steps in the spiral model can be generalized as follows:
1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.
Applications
For a typical shrink-wrap application, the spiral model might mean that you have a rough-cut of user elements (without the polished / pretty graphics) as an operable application, add features in phases, and, at some point, add the final graphics.
The spiral model is used most often in large projects. For smaller projects, the concept of agile software development is becoming a viable alternative. The US military has adopted the spiral model for its Future Combat Systems program.
Advantages
• Estimates (i.e. budget, schedule, etc.) get more realistic as work progresses, because important issues are discovered earlier.
• It is more able to cope with the (nearly inevitable) changes that software development generally entails.
• Software engineers (who can get restless with protracted design processes) can get their hands in and start working on a project earlier.

History of the waterfall model

History of the waterfall model
In 1970 Royce proposed what is presently referred to as the waterfall model as an initial concept, a model which he argued was flawed (Royce 1970). His paper then explored how the initial model could be developed into an iterative model, with feedback from each phase influencing subsequent phases, similar to many methods used widely and highly regarded by many today. It is only the initial model that received notice; his own criticism of this initial model has been largely ignored. The "waterfall model" quickly came to refer not to Royce's final, iterative design, but rather to his purely sequentially ordered model. This article will use this popular meaning of the phrase waterfall model. For an iterative model similar to Royce's final vision, see the spiral model.
Despite Royce's intentions for the waterfall model to be modified into an iterative model, use of the "waterfall model" as a purely sequential process is still popular, and, for some, the phrase "waterfall model" has since come to refer to any approach to software creation which is seen as inflexible and non-iterative. Those who use the phrase waterfall model pejoratively for non-iterative models that they dislike usually see the waterfall model itself as naive and unsuitable for an "iterative" process.
Usage of the waterfall model


The unmodified "waterfall model". Progress flows from the top to the bottom, like a waterfall.
In Royce's original waterfall model, the following phases are followed perfectly in order:
1. Requirements specification
2. Design
3. Construction (aka: implementation or coding)
4. Integration
5. Testing and debugging (aka: validation)
6. Installation
7. Maintenance
To follow the waterfall model, one proceeds from one phase to the next in a purely sequential manner. For example, one first completes "requirements specification" — they set in stone the requirements of the software. (Example requirements for Wikipedia may be "Wikipedia allows anonymous editing of articles; Wikipedia enables people to search for information", although real requirements specifications will be much more complex and detailed.) When the requirements are fully completed, one proceeds to design. The software in question is designed and a "blueprint" is drawn for implementers (coders) to follow — this design should be a plan for implementing the requirements given. When the design is fully completed, an implementation of that design is made by coders. Towards the later stages of this implementation phase, disparate software components produced by different teams are integrated. (For example, one team may have been working on the "web page" component of Wikipedia and another team may have been working on the "server" component of Wikipedia. These components must be integrated together to produce the whole system.) After the implementation and integration phases are complete, the software product is tested and debugged; any faults introduced in earlier phases are removed here. Then the software product is installed, and later maintained to introduce new functionality and remove bugs.
Thus the waterfall model maintains that one should move to a phase only when its preceding phase is completed and perfected. Phases of development in the waterfall model are thus discrete, and there is no jumping back and forth or overlap between them.
However, there are various modified waterfall models (including Royce's final model) that may include slight or major variations upon this process.
Arguments for the waterfall model
Time spent early on in software production can lead to greater economy later on in the software lifecycle; that is, it has been shown many times that a bug found in the early stages of the production lifecycle (such as requirements specification or design) is more economical (cheaper in terms of money, effort and time) to fix than the same bug found later on in the process. ([McConnell 1996], p. 72, estimates that "a requirements defect that is left undetected until construction or maintenance will cost 50 to 200 times as much to fix as it would have cost to fix at requirements time.") This should be obvious to some people; if a program design is impossible to implement, it is easier to fix the design at the design stage than to realize months down the track when program components are being integrated that all the work done so far has to be scrapped because of a broken design.
This is the central idea behind Big Design Up Front (BDUF) and the waterfall model - time spent early on making sure that requirements and design are absolutely correct is very useful in economic terms (it will save you much time and effort later). Thus, the thinking of those who follow the waterfall process goes, one should make sure that each phase is 100% complete and absolutely correct before proceeding to the next phase of program creation. Program requirements should be set in stone before design is started (otherwise work put into a design based on "incorrect" requirements is wasted); the program's design should be perfect before people begin work on implementing the design (otherwise they are implementing the "wrong" design and their work is wasted), et cetera.
A further argument for the waterfall model is that it places emphasis on documentation (such as requirements documents and design documents) as well as source code. More "agile" methodologies can de-emphasize documentation in favour of producing working code - documentation however can be useful as a "partial deliverable" should a project not run far enough to produce any substantial amounts of source code (allowing the project to be resumed at a later date). An argument against agile development methods, and thus partly in favour of the waterfall model, is that in agile methods project knowledge is stored mentally by team members. Should team members leave, this knowledge is lost, and substantial loss of project knowledge may be difficult for a project to recover from. Should a fully working design document be present (as is the intent of Big Design Up Front and the waterfall model) new team members or even entirely new teams should theoretically be able to bring themselves "up to speed" by reading the documents themselves. With that said, agile methods do attempt to compensate for this. For example, extreme programming (XP) advises that project team members should be "rotated" through sections of work in order to familiarize all members with all sections of the project (allowing individual members to leave without carrying important knowledge with them).
As well as the above, some prefer the waterfall model for its simple and arguably more disciplined approach. Rather than what the waterfall adherent sees as "chaos" the waterfall model provides a structured approach; the model itself progresses linearly through discrete, easily understandable and explainable "phases" and is thus easy to understand; it also provides easily markable "milestones" in the development process. It is perhaps for this reason that the waterfall model is used as a beginning example of a development model in many software engineering texts and courses.
It is argued that the waterfall model and Big Design Up Front in general can be suited to software projects which are stable (especially those projects with unchanging requirements, such as with "shrink wrap" software) and where it is possible and likely that designers will be able to fully predict problem areas of the system and produce a correct design before implementation is started. The waterfall model also requires that implementers follow the well made, complete design accurately, ensuring that the integration of the system proceeds smoothly.
The waterfall model is widely used, including by such large software development houses as those employed by the US Department of Defense and NASA (see "the waterfall model") and upon many large government projects (see "the standard waterfall model" on the Internet Archive). Those who use such methods do not always formally distinguish between the "pure" waterfall model and the various modified waterfall models, so it can be difficult to discern exactly which models are being used to what extent.
Steve McConnell sees the two big advantages of the pure waterfall model as producing a "highly reliable system" and one with a "large growth envelope", but rates it as poor on all other fronts. On the other hand, he views any of several modified waterfall models (described below) as preserving these advantages while also rating as "fair to excellent" on "work[ing] with poorly understood requirements" or "poorly understood architecture" and "provid[ing] management with progress visibility", and rating as "fair" on "manag[ing] risks", being able to "be constrained to a predefined schedule", "allow[ing] for midcourse corrections", and "provid[ing] customer with progress visibility". The only criterion on which he rates a modified waterfall as poor is that it requires sophistication from management and developers. (Rapid Development, 156)
Criticism of the waterfall model
The waterfall model however is argued by many to be a bad idea in practice, mainly because of their belief that it is impossible to get one phase of a software product's lifecycle "perfected" before moving on to the next phases and learning from them (or, at least, the belief that this is impossible for any non-trivial program). For example, clients may not be aware of exactly what requirements they want before they see a working prototype and can comment upon it; they may change their requirements constantly, and program designers and implementers may have little control over this. If clients change their requirements after a design is finished, that design must be modified to accommodate the new requirements, invalidating quite a good deal of effort if overly large amounts of time have been invested into "Big Design Up Front". (Thus, methods opposed to the naive waterfall model--such as those used in Agile software development--advocate less reliance on a fixed, static requirements document or design document). Designers may not (or, more likely, cannot) be aware of future implementation difficulties when writing a design for an unimplemented software product. That is, it may become clear in the implementation phase that a particular area of program functionality is extraordinarily difficult to implement. If this is the case, it is better to revise the design than to persist in using a design that was made based on faulty predictions and that does not account for the newly discovered problem areas.
Steve McConnell in Code Complete (a book which criticizes the widespread use of the waterfall model) refers to design as a "wicked problem" - a problem whose requirements and limitations cannot be entirely known before completion. The implication is that it is impossible to get one phase of software development "perfected" before time is spent in "reconnaissance" working out exactly where and what the big problems are.
To quote from David Parnas' "A Rational Design Process: How and Why to Fake It":
“Many of the [system's] details only become known to us as we progress in the [system's] implementation. Some of the things that we learn invalidate our design and we must backtrack.”
The idea behind the waterfall model may be "measure twice; cut once", and those opposed to the waterfall model argue that this idea tends to fall apart when the problem being measured is constantly changing due to requirement modifications and new realizations about the problem itself. The idea behind those who object to the waterfall model may be "time spent in reconnaissance is seldom wasted".
In summary, the criticisms of a non-iterative development approach (such as the waterfall model) are as follows:
• Many software projects must be open to change due to external factors; the majority of software is written as part of a contract with a client, and clients are notorious for changing their stated requirements. Thus the software project must be adaptable, and spending considerable effort in design and implementation based on the idea that requirements will never change is neither adaptable nor realistic in these cases.
• Unless those who specify requirements and those who design the software system in question are highly competent, it is difficult to know exactly what is needed in each phase of the software process before some time is spent in the phase "following" it. That is, feedback from following phases is needed to complete "preceding" phases satisfactorily. For example, the design phase may need feedback from the implementation phase to identify problem design areas. The counter-argument for the waterfall model is that experienced designers may have worked on similar systems before, and so may be able to accurately predict problem areas without time spent prototyping and implementing.
• Constant testing from the design, implementation and verification phases is required to validate the phases preceding them. Constant "prototype design" work is needed to ensure that requirements are non-contradictory and possible to fulfill; constant implementation is needed to find problem areas and inform the design process; constant integration and verification of the implemented code is necessary to ensure that implementation remains on track. The counter-argument for the waterfall model here is that constant implementation and testing to validate the design and requirements is only needed if the introduction of bugs is likely to be a problem. Users of the waterfall model may argue that if designers (et cetera) follow a disciplined process and do not make mistakes that there is no need for constant work in subsequent phases to validate the preceding phases.
• Frequent incremental builds (following the "release early, release often" philosophy) are often needed to build confidence for a software production team and their client.
• It is difficult to estimate time and cost for each phase of the development process without doing some "recon" work in that phase, unless those estimating time and cost are highly experienced with the type of software product in question.
• The waterfall model brings no formal means of exercising management control over a project and planning control and risk management are not covered within the model itself.
• Only a certain number of team members will be qualified for each phase; thus to have "code monkeys" who are only useful for implementation work do nothing while designers "perfect" the design is a waste of resources. A counter-argument to this is that "multiskilled" software engineers should be hired over "specialized" staff.
Modified waterfall models
In response to the perceived problems with the "pure" waterfall model, many modified waterfall models have been introduced. These models may address some or all of the criticisms of the "pure" waterfall model. Many different models are covered by Steve McConnell in the "lifecycle planning" chapter of his book Rapid Development: Taming Wild Software Schedules.
While all software development models will bear at least some similarity to the waterfall model, as all software development models will incorporate at least some phases similar to those used within the waterfall model, this section will deal with those closest to the waterfall model. For models which apply further differences to the waterfall model, or for radically different models seek general information on the software development process.
Royce's final model
Royce's final model, his intended improvement upon his initial "waterfall model", illustrated that feedback could (should, and often would) lead from code testing to design (as testing of code uncovered flaws in the design) and from design back to requirements specification (as design problems may necessitate the removal of conflicting or otherwise unsatisfiable / undesignable requirements). In the same paper Royce also advocated large quantities of documentation, doing the job "twice if possible" (a sentiment similar to that of Fred Brooks, famous for writing The Mythical Man Month, an influential book in software project management, who advocated planning to "throw one away"), and involving the customer as much as possible—now the basis of participatory design and of User Centred Design, a central tenet of Extreme Programming.
Overlapping stages, such as the requirements stage and the design stage, make it possible to integrate feedback from the design phase into the requirements. However, overlapping stages can make it difficult to know when you are finished with a given stage. Consequently, progress is more difficult to track
The "sashimi" model
The sashimi model (so called because it features overlapping phases, like the overlapping fish of Japanese sashimi) was originated by Peter DeGrace. It is sometimes simply referred to as the "waterfall model with overlapping phases" or "the waterfall model with feedback". Since phases in the sashimi model overlap, information of problem spots can be acted upon during phases of the waterfall model that would typically "precede" others in the pure waterfall model. For example, since the design and implementation phases will overlap in the sashimi model, implementation problems may be discovered during the "design and implementation" phase of the development process. This helps alleviate many of the problems associated with the Big Design Up Front philosophy of the waterfall model.
See also
• Agile software development
• Big Design Up Front
• Chaos model
• Iterative and incremental development
• Rapid application development
• Software development process
• Spiral model
• System Development Methodology, a type of waterfall model
• V-model
References
This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.
• McConnell, Steve (2006). Software Estimation: Demystifying the Black Art. Microsoft Press. ISBN 0-7356-0535-1.
• McConnell, Steve (2004). Code Complete, 2nd edition. Microsoft Press. ISBN 1-55615-484-4.
• McConnell, Steve (1996). Rapid Development: Taming Wild Software Schedules. Microsoft Press. ISBN 1-55615-900-5.
• Parnas, David, "A Rational Design Process: How and Why to Fake It". An influential paper which criticises the idea that software production can occur in perfectly discrete phases.
• Royce, Winston (1970), "Managing the Development of Large Software Systems", Proceedings of IEEE WESCON 26(August): 1-9.
• Joel Spolsky on Big Design Up Front
• Joel Spolsky - "daily builds are your friend"
• "Why people still believe in the waterfall model"
• The standard waterfall model for systems development NASA webpage, archived on Internet Archive March 10, 2005.
• Parametric Cost Estimating Handbook, NASA webpage based on the waterfall model, archived on Internet Archive March 8, 2005.