Software testing is a process of evaluating a system by manual or automated means to verify that it satisfies specified requirements, or to identify differences between expected and actual results.
What is the testing process?
: Verifying that given input data produce the expected output.
: The testing process is not only verifying the expected output but also the expected behavior.
: Testing is the process of locating errors.
Testing types:
* Functional testing: we are checking the behavior of the software.
* Non-functional testing: we are checking the performance, usability, volume, security.
Testing techniques:
* White box
* Black box
(Why) Software testing is important because defects left in the software may cause mission failure and can impact operational performance and reliability. Effective software testing helps to deliver quality software products.
What is quality: Quality is defined as meeting the client's requirements the first time and every time. Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and expectations, and is maintainable.
Quality Assurance: Quality assurance is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs.
Quality Control: The product quality is compared with applicable standards and action taken when non-conformance is detected.
What is the difference between testing and quality assurance (QA)?
Testing is the skill of checking the software against user requirements.
QA (Quality Assurance) is the team that provides the guidelines, standards, and planning to the software company and verifies the company's working methodology.
Software process specifies a method of developing software.
Software project is a development project in which a software process is used.
Software product is the outcome of a software project.
Defect: Deviation from the requirements.
: A failure to meet the acceptance criteria of the expected result.
Categories of Defects:
1. Variance from the product specifications.
2. Variance from client expectations.
Verification: "Are we building the product right”.
The software should conform to its specification.
Validation: "Are we building the right product”.
The software should do what the user really requires.
What methodologies have you used to develop test cases?
Equivalence Partitioning, Boundary Value Analysis, Error Guessing.
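As a hedged illustration, boundary value analysis and equivalence partitioning can be written directly as parameterized test cases. The sketch below assumes a hypothetical `is_valid_age` function that accepts ages 18 to 60 inclusive; the function, its valid range, and the use of pytest are assumptions, not part of these notes.

```python
# Hedged sketch: boundary value analysis / equivalence partitioning with pytest.
# `is_valid_age` and its 18-60 valid range are hypothetical examples.
import pytest

def is_valid_age(age: int) -> bool:
    """Example system under test: accepts ages 18..60 inclusive."""
    return 18 <= age <= 60

# Boundary values: just below, on, and just above each boundary,
# plus one representative from each equivalence class.
@pytest.mark.parametrize("age,expected", [
    (17, False), (18, True), (19, True),   # lower boundary
    (59, True), (60, True), (61, False),   # upper boundary
    (35, True),                            # valid equivalence class
    (-1, False), (150, False),             # invalid equivalence classes
])
def test_is_valid_age(age, expected):
    assert is_valid_age(age) == expected
```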
Defect Severity determines the defect's effect on the application, whereas Defect Priority determines the urgency of repair.
Severity is given by Testers and Priority by Developers
Traceability Matrix:
Mapping of the relationship between requirements and test cases.
We use a traceability matrix to check whether or not all the requirements are tested.
We start preparing the traceability matrix in the information-gathering phase itself. For each requirement we record a requirement ID, a description, and a priority. We then identify the test conditions for that requirement and map the test cases to it. Using this mapping, we can see how many test cases each requirement has and whether they pass. If every requirement meets its success criteria, we can stop testing the project.
Generally a traceability matrix consists of:
Req_ID | Description | Priority | Test Conditions | TestCase_ID | Phase of testing
This document helps identify whether the test case document contains tests for all the identified unit functions from the design specification. From this matrix we can derive the percentage of test coverage, i.e. the proportion of functionality that has been tested versus not tested.
Traceability matrix: Gathering information from the sources and listing it in a particular format as data, to ensure that all the required information has been covered.
Traceability matrix: A method used to validate the compliance of a process or product with the requirements for that process or product. The requirements are each listed in a row of the matrix and the columns of the matrix are used to identify how and where each requirement has been addressed.
Description: A table that traces the requirements to the system deliverable component for that stage that responds to the requirement.
Size and Format: For each requirement, identify the component in the current stage that responds to the requirement. The requirement may be mapped to such items as a hardware component, an application unit, or a section of a design specification.
BASELINE TRACEABILITY MATRIX
Description: A table that documents the requirements of the system for use in subsequent stages to confirm that all requirements have been met.
Size and Format: Document each requirement to be traced. The requirement may be mapped to such things as a hardware component, an application unit, or a section of a design specification.
Traceability Matrix Table
Identifier | Requirement | Priority (e.g., (M)andatory, (D)esirable, or (O)ptional) | Change Requests | Module (or Hardware Component, Application Unit, Deliverable Section, e.g., Design Specification)
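To make the idea concrete, a traceability matrix can be held as a simple mapping from requirement IDs to test case IDs and used to flag uncovered requirements. The sketch below is illustrative only; the requirement and test case IDs are hypothetical.

```python
# Hedged sketch: a minimal traceability matrix as a dict of requirement -> test cases.
# All IDs below are hypothetical placeholders.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],            # no test cases mapped yet
}

def coverage_report(matrix: dict[str, list[str]]) -> None:
    covered = {req for req, tcs in matrix.items() if tcs}
    uncovered = set(matrix) - covered
    pct = 100 * len(covered) / len(matrix)
    print(f"Requirement coverage: {pct:.0f}%")
    for req in sorted(uncovered):
        print(f"  Not covered: {req}")

coverage_report(traceability)   # REQ-003 would be reported as not covered
```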
Software Testing Lifecycle & its phases: STLC
1. TEST PLAN PREPARATION.
2. TEST CASE DESIGN.
3. TEST EXECUTION.
4. TEST LOG PREPARATION.
5. DEFECT TRACKING.
6. TEST REPORT PREPARATION.
TEST PLAN: A test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The test plan is prepared by referring to the approved and new versions of the FDD and TDD. This document helps outside or new people understand the project. It tells who is included in the project and what roles are assigned to them, which testing tools are used, what environment is required, which items are to be tested and not tested, the risks and mitigations in testing, the entry and exit criteria, the budget of the project, and what should be done when a defect is found. By referring to this test plan, the testing team starts its work.
How can you develop a test plan: To develop a test plan, I first study the baseline documentation thoroughly: the BRS (business requirement specification), SRS (system requirement specification), object models, and data models. Once I know the business and the functionality of the application thoroughly, I prepare the RTM (requirements traceability matrix); using this we can eliminate duplicate test cases. Then I prepare the test plan; within it I first prepare the test cases and the related test data for all test cases. Once the test cases and data are complete, I start testing the application.
TEST CASE: A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
Why do we write test cases?
The basic objective of writing test cases is to validate the testing coverage of the application.
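For illustration, the particulars of a test case (identifier, objective, input data, steps, expected result) can also be captured in an automated test. The login feature, the `authenticate` function, and the credentials below are hypothetical, not taken from these notes.

```python
# Hedged sketch: a documented test case written as an automated check.
# TC_LOGIN_001 and the `authenticate` function are hypothetical.
def authenticate(username: str, password: str) -> bool:
    """Stand-in for the system under test."""
    return username == "demo" and password == "secret"

def test_tc_login_001_valid_credentials():
    """
    Test case ID : TC_LOGIN_001
    Objective    : Verify that a user with valid credentials can log in.
    Input data   : username='demo', password='secret'
    Expected     : authentication succeeds.
    """
    assert authenticate("demo", "secret") is True
```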
Severity: The degree of impact that a defect has on the development or operation of a component or system.
Test case analysis means reviewing the test cases, normally done by the team lead or quality manager. In this process he or she walks through the test cases, compares them with the SRS or functional document, checks whether all major scenarios were considered by the test engineer while writing the test cases, and informs the team accordingly.
Bucket testing is also called variable testing or A/B testing. Buckets are our implementation of smart objects. A bucket is a storage unit that contains data. The bucket design goals are aggregation, intelligence, self-sufficiency, mobility, heterogeneity, archive independence, and metadata, as well as the methods for accessing both. Buckets contain zero or more packages; packages contain zero or more elements. Actual data objects are stored as elements, and elements are grouped together in packages within a bucket. Testing these data objects, which are stored as elements and grouped together in packages within a bucket, is known as bucket testing.
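In the A/B-testing sense of the term, users are typically split into buckets and each bucket is shown a different variant. A minimal sketch of deterministic bucket assignment follows; the bucket names and the hashing scheme are assumptions for illustration only.

```python
# Hedged sketch: deterministic A/B bucket assignment by hashing a user id.
# Bucket names "A"/"B" and the 50/50 split are illustrative assumptions.
import hashlib

def assign_bucket(user_id: str) -> str:
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_bucket("user-42"))   # the same user always lands in the same bucket
```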
What is the difference between SRS and BRS?
SRS: Software Requirements Specification. The SRS is prepared by the software analyst, who uses the BRS as the basis for preparing it.
BRS / FS: Business Requirement Specification. The BRS is given by the client. It describes the business logic.
Test Strategy: A test strategy is a general framework, whereas a test plan is a specific document describing all the testing effort needed to achieve the objectives defined in the strategy.
Test script: The instructions in a test program. It defines the actions and pass/fail criteria. For example, if the action is "to enter a valid account number," the expected result is that the data are accepted. Entering an invalid number should yield a particular error message
Test scenario: A set of test cases that ensure that the business process flows are tested from end to end. They may be independent tests or a series of tests that follow each other, each dependent on the output of the previous one. The terms "test scenario" and "test case" are often used synonymously.
Test suite: A collection of test scenarios and/or test cases that are related or that may cooperate with each other
CMMI Levels:
Initial: The software process is characterized as ad hoc. Few processes are defined, and success depends on individual effort and heroics.
Repeatable: Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
Defined: The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.
Managed: Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.
Optimizing: Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies
==============================================================================
Acceptance Test: The test performed by users of a new or changed system in order to approve the system and go live. See user acceptance test.
Ad Hoc Testing:
: Testing where the tester tries to break the software by randomly trying its functionality.
: Informal testing without a test case.
Functional Test: Testing functional requirements of software, such as menus and key commands.
Regression Test: Regression means retesting the unchanged parts of the application. Test cases are re-executed in order to check whether previous functionality of application is working fine and new changes have not introduced any new bugs.
Smoke Test: Testing the major functionality of the application build.
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it’s basically operational.
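As a hedged sketch, a smoke/sanity suite is usually a handful of fast checks on the major functions, run to decide whether a build is worth deeper testing. The checks below are hypothetical stubs, not real application calls.

```python
# Hedged sketch: a tiny smoke-test suite covering only the major functions.
# The checked functions are hypothetical stand-ins for a real build.
def app_starts() -> bool: return True
def home_page_loads() -> bool: return True
def user_can_log_in() -> bool: return True

def test_smoke_suite():
    # If any of these fail, the build is rejected before detailed testing begins.
    assert app_starts()
    assert home_page_loads()
    assert user_can_log_in()
```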
End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Performance testing: is the process of determining the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Also known as "load testing".
Alpha Testing: Alpha testing is conducted at the developer's site, in a controlled environment, by the end user of the software.
Beta Testing: Testing the application after installation at the client's site, performed by end users. Follows alpha testing.
What is the difference between structural and functional testing?
Structural is a "white box" testing and based on the algorithm or code.
Functional testing is a "black box" (behavioral) testing where the tester verifies the functional specification.
Compatibility Testing: Compatibility testing checks that the software is compatible with the other elements of the system.
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
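As a hedged illustration of the technique, the sketch below drives one parameterized test from rows in a CSV file; the file name, its columns, and the `add` function under test are assumptions, not anything from these notes.

```python
# Hedged sketch: data-driven testing with externally maintained values.
# "testdata.csv" (columns a,b,expected) and the `add` function are hypothetical.
import csv
import pytest

def add(a: int, b: int) -> int:
    return a + b

def load_rows(path: str = "testdata.csv"):
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            yield int(row["a"]), int(row["b"]), int(row["expected"])

@pytest.mark.parametrize("a,b,expected", list(load_rows()))
def test_add_from_csv(a, b, expected):
    assert add(a, b) == expected
```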
Database testing: Database testing means the test engineer should test data integrity, data access, query retrieval, modifications, updates, deletions, etc.
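A hedged sketch of one such check follows: it uses an in-memory SQLite database, and the table and values are invented for illustration.

```python
# Hedged sketch: verifying insert/update/delete behaviour against a database.
# Uses an in-memory SQLite table; the schema and data are illustrative only.
import sqlite3

def test_crud_round_trip():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
    con.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0)")
    con.execute("UPDATE accounts SET balance = balance - 40 WHERE id = 1")
    (balance,) = con.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
    assert balance == 60.0            # data integrity after the update
    con.execute("DELETE FROM accounts WHERE id = 1")
    assert con.execute("SELECT COUNT(*) FROM accounts").fetchone()[0] == 0
    con.close()
```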
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
Gorilla Testing: Testing one particular module, functionality heavily.
Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.
Positive Testing: Testing aimed at showing software works. Also known as "test to pass".
Negative Testing: Tests aimed at showing that a component or system does not work.
Negative testing is related to the testers’ attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions.
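A hedged example of a negative test follows: it checks that invalid input is rejected with an exception. The `withdraw` function and its validation rules are hypothetical.

```python
# Hedged sketch: negative testing -- invalid input should be rejected cleanly.
# The `withdraw` function and its validation rules are hypothetical.
import pytest

def withdraw(balance: float, amount: float) -> float:
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

def test_withdraw_rejects_negative_amount():
    with pytest.raises(ValueError):
        withdraw(100.0, -5.0)

def test_withdraw_rejects_overdraft():
    with pytest.raises(ValueError):
        withdraw(100.0, 500.0)
```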
Passive Test: Monitoring the results of a running system without introducing any special test data.
Recovery Test: Testing a system's ability to recover from a hardware or software failure.
Stress Testing: Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Usability testing: Usability testing checks the application for user-friendliness (ease of use).
User acceptance testing: User acceptance testing is determining if software is satisfactory to an end-user or customer.
Volume Testing: In volume testing, the system is subjected to a large volume of data.
What is the difference between testing and debugging?
The big difference is that debugging is conducted by a programmer, who fixes the errors during the debugging phase. A tester never fixes the errors, but rather finds them and returns them to the programmer.
What is the difference between stress & Load Testing?
Stress testing: applies to all types of applications; deny the application the resources it needs. For example, if the application is developed for 256 MB RAM or higher, test it on 64 MB RAM and verify that it fails, and fails safely.
Load testing: applies to client/server applications (2-tier or higher). For example, if the requirement is 5000 users at a time, check the throughput of the application; if that is achieved, load testing for the required load is done. If not, tune the application and/or the servers. (A hedged sketch of a simple load test follows.)
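The sketch below simulates concurrent users against a stubbed request function and reports throughput and average response time; the user count, the `send_request` stub, and the numbers printed are all assumptions for illustration.

```python
# Hedged sketch: a toy load test measuring throughput and response time.
# `send_request` stands in for a real client call; counts/latency are invented.
import time
from concurrent.futures import ThreadPoolExecutor

def send_request() -> float:
    start = time.perf_counter()
    time.sleep(0.01)                      # pretend network/server latency
    return time.perf_counter() - start

def run_load_test(concurrent_users: int = 50, requests_per_user: int = 10):
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(lambda _: send_request(),
                                  range(concurrent_users * requests_per_user)))
    elapsed = time.perf_counter() - started
    print(f"Throughput: {len(latencies) / elapsed:.1f} req/s, "
          f"avg latency: {sum(latencies) / len(latencies) * 1000:.1f} ms")

run_load_test()
```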
BUG LIFE CYCLE: Description of Various Stages:
1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.
2. Open: After a tester has posted a bug, the lead of the tester approves that the bug is genuine and he changes the state as “OPEN”.
3. Assign: Once the lead changes the state as “OPEN”, he assigns the bug to corresponding developer or developer team. The state of the bug now is changed to “ASSIGN”.
4. Retest: Once the developer fixes the bug, he has to assign the bug to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to “RETEST”. It specifies that the bug has been fixed and is released to the testing team.
5. Deferred: A bug changed to the deferred state is expected to be fixed in a future release. There can be many reasons for changing a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.
7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to “DUPLICATE”.
8. Verified: Once the bug is fixed and the status is changed to “RETEST”, the tester tests the bug. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to “CLOSED”.
9. Reopened: If the bug still exists even after it has been fixed by the developer, the tester changes the status to “REOPENED”. The bug travels through the life cycle once again.
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested, and approved.
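The allowed transitions of this life cycle can also be modeled as a small state machine. The sketch below encodes the states and transitions described above; the transition table is a simplified assumption, not a prescribed workflow.

```python
# Hedged sketch: the bug life cycle above as a simple state machine.
# The transition table is a simplified reading of the stages listed here.
ALLOWED = {
    "NEW":      {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN":     {"ASSIGN"},
    "ASSIGN":   {"RETEST", "DEFERRED", "REJECTED"},
    "RETEST":   {"VERIFIED", "REOPENED"},
    "REOPENED": {"ASSIGN"},
    "VERIFIED": {"CLOSED"},
    "DEFERRED": {"ASSIGN"},
}

def move(current: str, new: str) -> str:
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

state = "NEW"
for nxt in ("OPEN", "ASSIGN", "RETEST", "VERIFIED", "CLOSED"):
    state = move(state, nxt)
print(state)   # CLOSED
```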
What are all the basic elements in a defect report?
Defect ID, explanation, module in which the bug was found, test case ID, bug found by, priority, severity, status (i.e. open or closed).
Elements in the defect report:
Defect ID, Severity, Suggestion, Status, Attachments (Comment), Program name, Release, Sign, How to reproduce, Version, Date, Defect Type.
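These elements can be captured as a simple record type, as in the hedged sketch below; the field set mirrors the list above and the example values are invented.

```python
# Hedged sketch: a defect report record mirroring the elements listed above.
# Example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class DefectReport:
    defect_id: str
    module: str
    test_case_id: str
    found_by: str
    severity: str          # e.g. Critical / Major / Minor
    priority: str          # e.g. High / Medium / Low
    status: str            # e.g. Open / Closed
    how_to_reproduce: str
    version: str

bug = DefectReport(
    defect_id="DEF-101", module="Login", test_case_id="TC_LOGIN_001",
    found_by="tester1", severity="Major", priority="High", status="Open",
    how_to_reproduce="Enter a valid user name with an empty password and submit.",
    version="1.2.0",
)
print(bug.status)
```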
The actual software testing process in companies:
Whenever we get a new project there is an initial project familiarization meeting. In this meeting we basically discuss: Who is the client? What is the project duration and when is delivery? Who is involved in the project, i.e. manager, tech leads, QA leads, developers, testers, etc.?
From the SRS (software requirement specification), a project plan is developed. The testers' responsibility is to create the software test plan from this SRS and project plan. Developers start coding from the design. The project work is divided into different modules and these modules are distributed among the developers. In the meantime, the testers' responsibility is to create test scenarios and write test cases for the assigned modules. We try to cover almost all the functional test cases from the SRS. The data can be maintained manually in Excel test case templates or in bug tracking tools.
When developers finish individual modules, those modules are assigned to testers. Smoke testing is performed on these modules, and if they fail this test the modules are reassigned to the respective developers for a fix. For the modules that pass, manual testing is carried out from the written test cases. If any bug is found, it gets assigned to the module's developer and logged in the bug tracking tool. Once the bug is fixed, the tester does bug verification and regression testing of all related modules. If the bug passes verification it is marked as verified and closed; otherwise the bug cycle described above is repeated.
Different tests are performed on individual modules, and integration testing on the module integration. These tests include compatibility testing, i.e. testing the application on different hardware, OS versions, software platforms, and browsers. Load and stress testing is also carried out according to the SRS. Finally, system testing is performed by creating a virtual client environment. Once all the test cases pass, the test report is prepared and the decision is taken to release the product.
What is difference between client server and Web Testing?
Each differs in the environment in which it is tested, and you lose control over the environment in which the application runs as you move from desktop to web applications.
A desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You test the complete application broadly in categories like GUI, functionality, load, and backend, i.e. the DB.
In a client/server application you have two different components to test. The application is loaded on the server machine, while the application exe is installed on every client machine. You test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and backend. This environment is mostly used in intranet networks. You are aware of the number of clients and servers and their locations in the test scenario.
Web application: The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine; you have to test it on different web browsers. Web applications are supposed to be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility and operating system compatibility, error handling, static pages, backend testing, and load testing.
Keep in mind that even though differences exist between these three environments, the basic quality assurance and testing principles remain the same and apply to all.
What is the Test Server?
Usually there are three types of servers in a project:
1. Development Server 2. Test Server 3. Production Server
Product will be developed on Development Server. Test server is the server on which the product will be tested with the test data. Once the product is certified on the test server, application will be deployed on the production server.
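A hedged sketch of how these environments are often kept separate in configuration is shown below; the URLs and settings are placeholders, not anything from these notes.

```python
# Hedged sketch: keeping development, test, and production settings separate.
# All URLs and flags below are hypothetical placeholders.
ENVIRONMENTS = {
    "development": {"base_url": "http://dev.example.local",  "debug": True},
    "test":        {"base_url": "http://test.example.local", "debug": True},
    "production":  {"base_url": "https://www.example.com",   "debug": False},
}

def get_config(env: str) -> dict:
    return ENVIRONMENTS[env]

print(get_config("test")["base_url"])   # tests run against the test server
```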
When to stop testing? a) When all the requirements have been adequately and successfully exercised through test cases
b) Bug reporting rate reaches a particular limit
c) The test environment no more exists for conducting testing
d) The scheduled time for testing is over
e) The budget allocation for testing is over.
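These stop criteria can be expressed as a simple exit-criteria check, as in the hedged sketch below; the thresholds and metric names are invented for illustration.

```python
# Hedged sketch: combining the stop-testing criteria above into one check.
# The thresholds and metric names are illustrative assumptions.
def can_stop_testing(requirements_passed_pct: float,
                     new_bugs_per_week: int,
                     days_remaining: int,
                     budget_remaining: float) -> bool:
    quality_goal_met = requirements_passed_pct >= 95.0 and new_bugs_per_week <= 2  # (a), (b)
    out_of_time = days_remaining <= 0          # (d)
    out_of_budget = budget_remaining <= 0      # (e)
    return quality_goal_met or out_of_time or out_of_budget

print(can_stop_testing(97.5, 1, 10, 1200.0))   # True: quality goal met in this example
```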
Functional Testing Vs Non-Functional Testing
1. Functional testing: Testing the developed application against the business requirements. It is done using the functional specifications provided by the client or the design specifications (such as use cases) provided by the design team.
Non-functional testing: Testing the application based on the client's performance requirements. It is done based on the requirements and test scenarios defined by the client.
2. Functional testing covers:
• Unit Testing
• Smoke Testing / Sanity Testing
• Integration Testing (Top-Down, Bottom-Up)
• Interface & Usability Testing
• System Testing
• Regression Testing
• Pre User Acceptance Testing (Alpha & Beta)
• User Acceptance Testing
• White Box & Black Box Testing
• Globalization & Localization Testing
Non-functional testing covers:
• Load and Performance Testing
• Ergonomics Testing
• Stress & Volume Testing
• Compatibility & Migration Testing
• Data Conversion Testing
• Security / Penetration Testing
• Operational Readiness Testing
• Installation Testing
• Security Testing (Application Security, Network Security, System Security)
Defect Severity determines the defect's effect on the application, whereas Defect Priority determines the urgency of repair.
Severity is given by Testers and Priority by Developers. The three severity levels are:
1. Critical. 2. Major. 3. Minor.
1. Critical - The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system.
2. Major - The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system. There is no way to make the failed component(s) work; however, there are acceptable processing alternatives which will yield the desired result.
3. Minor - The defect does not cause a failure, does not impair usability, and the desired processing results are easily obtained by working around the defect.
Examples of how severity and priority combine:
1. High Severity & Low Priority: For example an application which generates some banking related reports weekly, monthly, quarterly & yearly by doing some calculations. If there is a fault while calculating yearly report. This is a high severity fault but low priority because this fault can be fixed in the next release as a change request.
2. High Severity & High Priority: In the above example if there is a fault while calculating weekly report. This is a high severity and high priority fault because this fault will block the functionality of the application immediately within a week. It should be fixed urgently.
3. Low Severity & High Priority: If there is a spelling mistake or content issue on the homepage of a website which has daily hits of lakhs. In this case, though this fault is not affecting the website or other functionalities but considering the status and popularity of the website in the competitive market it is a high priority fault.
4. Low Severity & Low Priority: If there is a spelling mistake on the pages which has very less hits throughout the month on any website. This fault can be considered as low severity and low priority.
Severity Levels can be defined as follow:
S1 - Urgent/Showstopper: For example, a system crash or an error message forcing the window to close.
The tester's ability to operate the system is affected either totally (system down) or almost totally. A major area of the user's system is affected by the incident, and it is significant to business processes.
S2 – Medium: Exists when there is a problem against what is required in the specs, but the tester can go on with testing.
Incident affects an area of functionality but there is a work-around which negates impact to business process.
This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs only at one or two customer sites, or is intermittent.
S3 – Low: This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. Problems do not impact use of the product in any substantive way. These are incidents that are cosmetic in nature and of no or very low impact to business processes.
In addition to the defect severity level, defect priority level can be used with severity categories to determine the immediacy of repair.
A five-level repair priority scale is also used in common testing practice. The levels are:
1. Resolve Immediately - Further development and/or testing cannot occur until the defect has been repaired. The system cannot be used until the repair has been effected.
2. Give High Attention - The defect must be resolved as soon as possible because it is impairing development and/or testing activities. System use will be severely affected until the defect is fixed.
3. Normal Queue - The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.
4. Low Priority - The defect is an irritant which should be repaired, but the repair can wait until more serious defects have been fixed.
5. Defer - The defect repair can be put off indefinitely. It can be resolved in a future major system revision or not resolved at all.