Saturday, September 15, 2007

Types of Testing

What's Ad Hoc Testing ?
Testing in which the tester tries to break the software by randomly exercising its functionality, without formal test cases.

What's the Accessibility Testing ?
Testing that determines if software will be usable by people with disabilities.

What's the Alpha Testing ?
Alpha Testing is conducted at the developer's site, in a controlled environment, by end users of the software.

What's the Beta Testing ?
Testing the application after installation at the client's place.

What is Component Testing ?
Testing of individual software components (Unit Testing).

What's Compatibility Testing ?
Compatibility testing checks that the software is compatible with other elements of the system, such as hardware, operating systems, browsers, or other software.

What is Concurrency Testing ?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

What is Conformance Testing ?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

What is Context Driven Testing ?
The context-driven school of software testing is a flavor of Agile testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

What is Data Driven Testing ?
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
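For illustration, here is a minimal Python sketch of the idea (the file name discount_cases.csv, the discount rule, and the helper under test are all hypothetical): the test action stays the same while its inputs and expected outputs come from an external file.

import csv
import unittest

def discount(amount):
    # Hypothetical function under test: 10% off orders of 100 or more.
    return amount * 0.9 if amount >= 100 else amount

class DataDrivenDiscountTest(unittest.TestCase):
    def test_from_csv(self):
        # Each row of discount_cases.csv holds: amount, expected_result.
        with open("discount_cases.csv", newline="") as f:
            for row in csv.reader(f):
                amount, expected = float(row[0]), float(row[1])
                # The test action is parameterized by externally defined data.
                self.assertAlmostEqual(discount(amount), expected)

if __name__ == "__main__":
    unittest.main()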

What is Conversion Testing ?
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

What is Dependency Testing ?
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

What is Depth Testing ?
A test that exercises a feature of a product in full detail.

What is Dynamic Testing ?
Testing software through executing it. See also Static Testing.

What is Endurance Testing ?
Checks for memory leaks or other problems that may occur with prolonged execution.

What is End-to-End testing ?
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

What is Exhaustive Testing ?
Testing which covers all combinations of input values and preconditions for an element of the software under test.

What is Gorilla Testing ?
Testing one particular module or piece of functionality heavily.

What is Installation Testing ?
Confirms that the application under test installs, upgrades, and uninstalls correctly on the supported platforms and configurations.

What is Localization Testing ?
Testing that verifies the software has been properly adapted for a specific locale, i.e. the language, date formats, currency, and other conventions of a specific locality.

What is Loop Testing?
A white box testing technique that exercises program loops.
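For illustration, a minimal Python sketch of loop testing (the total function and its limit are hypothetical): the loop is exercised with zero iterations, one iteration, a typical count, and values around its maximum.

def total(values, limit=100):
    # Hypothetical loop under test: sums at most `limit` values.
    s = 0
    for i, v in enumerate(values):
        if i == limit:
            break
        s += v
    return s

# Classic loop-testing cases: skip the loop, one pass, a typical
# number of passes, and the boundary around the maximum count.
assert total([]) == 0            # zero iterations
assert total([5]) == 5           # exactly one iteration
assert total([1, 2, 3]) == 6     # typical case
assert total([1] * 99) == 99     # limit - 1 iterations
assert total([1] * 100) == 100   # exactly the limit
assert total([1] * 101) == 100   # limit + 1: extra value ignored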

What is Mutation Testing?
Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
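For illustration, a minimal Python sketch of the idea (the is_adult function and its test cases are hypothetical): a deliberate 'mutant' is introduced, and the test data proves useful because it detects, or "kills", the mutant.

# Original code and its test suite.
def is_adult(age):
    return age >= 18

def test_suite(fn):
    # Returns True if all test cases pass for the given implementation.
    cases = [(17, False), (18, True), (30, True)]
    return all(fn(age) == expected for age, expected in cases)

# A mutant: the boundary operator is deliberately changed from >= to >.
def is_adult_mutant(age):
    return age > 18

assert test_suite(is_adult)             # original passes
assert not test_suite(is_adult_mutant)  # mutant is killed: the (18, True) case detects it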

What is Monkey Testing ?
Testing a system or an application on the fly, i.e., just a few tests here and there to ensure the system or application does not crash.

What is Positive Testing ?
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
What is Negative Testing?
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.
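For illustration, a minimal Python sketch (the parse_age function and its validation rules are hypothetical) showing one positive and one negative test for the same routine:

import unittest

def parse_age(text):
    # Hypothetical function under test: accepts whole numbers 0-130 only.
    age = int(text)  # raises ValueError for non-numeric input
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

class AgeTests(unittest.TestCase):
    def test_positive(self):
        # Positive test ("test to pass"): valid input should work.
        self.assertEqual(parse_age("42"), 42)

    def test_negative(self):
        # Negative test ("test to fail"): invalid input should be rejected.
        with self.assertRaises(ValueError):
            parse_age("abc")
        with self.assertRaises(ValueError):
            parse_age("200")

if __name__ == "__main__":
    unittest.main()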

What is Path Testing ?
Testing in which all paths in the program source code are tested at least once.

What is Performance Testing ?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

What is Ramp Testing?
Continuously raising an input signal until the system breaks down.

What is Recovery Testing?
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

What is the Re-testing testing ?
Re-testing: running a test again after a defect has been fixed, to confirm that the original fault has been removed.

What is the Regression testing ?
Regression: checking that changes in the code have not affected the existing working functionality.

What is Sanity Testing ?
Brief test of major functional elements of a piece of software to determine if it's basically operational.

What is Scalability Testing ?
Performance testing focused on ensuring the application under test gracefully handles increases in work load.

What is Security Testing ?
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

What is Stress Testing ?
Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.

What is Smoke Testing ?
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

What is Soak Testing ?
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

What's the Usability testing ?
Usability testing checks the user-friendliness of the application: how easily a user can learn, navigate, and use it.

What's the User acceptance testing ?
User acceptance testing is determining if software is satisfactory to an end-user or customer.

What's the Volume Testing ?
In Volume testing, the system is subjected to large volumes of data.

1 : With thorough testing it is possible to remove all defects from a program prior to delivery to the customer.
a. True
b. False

2 : Which of the following are characteristics of testable software ?
a. observability
b. simplicity
c. stability
d. all of the above

3 : The testing technique that requires devising test cases to demonstrate that each program function is operational is called
a. black-box testing
b. glass-box testing
c. grey-box testing
d. white-box testing

4 : The testing technique that requires devising test cases to exercise the internal logic of a software module is called
a. behavioral testing
b. black-box testing
c. grey-box testing
d. white-box testing

5 : What types of errors are missed by black-box testing and can be uncovered by white-box testing ?
a. behavioral errors
b. logic errors
c. performance errors
d. typographical errors
e. both b and d

6 : Program flow graphs are identical to program flowcharts.
a. True
b. False

7 : The cyclomatic complexity metric provides the designer with information regarding the number of
a. cycles in the program
b. errors in the program
c. independent logic paths in the program
d. statements in the program

8 : The cyclomatic complexity of a program can be computed directly from a PDL representation of an algorithm without drawing a program flow graph.
a. True
b. False

9 : Condition testing is a control structure testing technique where the criteria used to design test cases is that they
a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs

10 : Data flow testing is a control structure testing technique where the criteria used to design test cases is that they
a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs

11 : Loop testing is a control structure testing technique where the criteria used to design test cases is that they
a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs

12 : Black-box testing attempts to find errors in which of the following categories
a. incorrect or missing functions
b. interface errors
c. performance errors
d. all of the above
e. none of the above

13 : Graph-based testing methods can only be used for object-oriented systems
a. True
b. False

14 : Equivalence testing divides the input domain into classes of data from which test cases can be derived to reduce the total number of test cases that must be developed.
a. True
b. False

15 : Boundary value analysis can only be used to do white-box testing.
a. True
b. False

16 : Comparison testing is typically done to test two competing products as part of customer market analysis prior to product release.
a. True
b. False

17 : Orthogonal array testing enables the test designer to maximize the coverage of the test cases devised for relatively small input domains.
a. True
b. False

18 : Test case design "in the small" for OO software is driven by the algorithmic detail of the individual operations.
a. True
b. False

19 : Encapsulation of attributes and operations inside objects makes it easy to obtain object state information during testing.
a. True
b. False

20 : Use-cases can provide useful input into the design of black-box and state-based tests of OO software.
a. True
b. False

21 : Fault-based testing is best reserved for
a. conventional software testing
b. operations and classes that are critical or suspect
c. use-case validation
d. white-box testing of operator algorithms

22 : Testing OO class operations is made more difficult by
a. encapsulation
b. inheritance
c. polymorphism
d. both b and c

23 : Scenario-based testing
a. concentrates on actor and software interaction
b. misses errors in specifications
c. misses errors in subsystem interactions
d. both a and b

24 : Deep structure testing is not designed to
a. examine object behaviors
b. exercise communication mechanisms
c. exercise object dependencies
d. exercise structure observable by the user

25 : Random order tests are conducted to exercise different class instance life histories.
a. True
b. False

26 : Which of these techniques is not useful for partition testing at the class level
a. attribute-based partitioning
b. category-based partitioning
c. equivalence class partitioning
d. state-based partitioning

27 : Multiple class testing is too complex to be tested using random test cases.
a. True
b. False

28 : Tests derived from behavioral class models should be based on the
a. data flow diagram
b. object-relation diagram
c. state diagram
d. use-case diagram

29 : Client/server architectures cannot be properly tested because network load is highly variable.
a. True
b. False

30 : Real-time applications add a new and potentially difficult element to the testing mix
a. performance
b. reliability
c. security
d. time

1. What is the meaning of COSO ?
a. Common Sponsoring Organizations
b. Committee Of Sponsoring Organizations
c. Committee Of Standard Organizations
d. Common Standard Organization
e. None of the above

2. Which one is not a key term used in internal control and security ?
a. Threat
b. Risk Control
c. Vulnerability
d. Exposure
e. None

3. Management is not responsible for an organization's internal control system.
a. True
b. False

4. Who is ultimately responsible for the internal control system ?
a. CEO
b. Project Manager
c. Technical Manager
d. Developer
e. Tester

5. Who will provide important oversight to the internal control system ?
a. Board of Directors
b. Audit Committee
c. Accounting Officers
d. Financial Officers
e. both a & b
f. both c & d

6. The sole purpose of Risk Control is to avoid risk.
a. True
b. False

7. Management control involves limiting access to computer resources.
a. True
b. False

8. Software developed by contractors who are not part of the organization is referred to as in-sourcing.
a. True
b. False

9. Which one is not a tester's responsibility ?
a. Assure the process for contracting software is adequate
b. Review the adequacy of the contractor's test plan
c. Perform acceptance testing on the software
d. Assure the ongoing operation and maintenance of the contracted software
e. None of the above

10. The software tester may or may not be involved in the actual acceptance testing.
a. True
b. False

11. In client systems, testing should focus on performance and compatibility.
a. True
b. False

12. A database access application typically consists of the following elements, except:
a. User Interface code
b. Business logic code
c. Data-access service code
d. Data Driven code

13. Wireless technologies represent a rapidly emerging area of growth and importance for providing ever-present access to the internet and email.
a. True
b. False

14. Acceptance testing involves procedures for identifying acceptance criteria for interim life cycle products and for accepting them.
a. True
b. False

15. Acceptance testing is designed to determine whether or not the software is “fit” for the user to use. The concept of “fit” is important in both design and testing. There are four components of “fit”.
a. True
b. False

16. Acceptance testing occurs only at the end point of the development process; it should be an ongoing activity that tests both interim and final products.
a. True
b. False

17. Acceptance requirements that a system must meet can be divided into ________ categories.
a. Two
b. Three
c. Four
d. Five

18. _______ categories of testing techniques can be used in acceptance testing.
a. Two
b. Three
c. Four
d. Five

19. The _____________ defines the objectives of the acceptance activities and a plan for meeting them.
a. Project Manager
b. IT Manager
c. Acceptance Manager
d. ICO

20. Software Acceptance testing is the last opportunity for the user to examine the software for functional, interface, performance, and quality features prior to the final acceptance review.
a. True
b. False

Try to answer these questions, friends. All of these have been asked in various interviews.
What is Software Testing ?
What is the Purpose of Testing?
What types of testing do testers perform?
What is the Outcome of Testing ?
What kind of testing have you done ?
What is the need for testing ?
How do you determine what is to be tested ?
How do you go about testing a project ?

What is the Initial Stage of testing ?
What are the various levels of testing ?
What are the Minimum requirements to start testing ?
What are test metrics ?
Why do you go for White box testing, when Black box testing is available ?
What are the entry criteria for Automation testing ?
When to start and Stop Testing ?
What is Quality ?
What is quality assurance ?
What is quality control ?
What is verification ?
What is validation ?
What is SDLC and TDLC ?
What are the Qualities of a Tester ?
What is the relationship between Quality & Testing ?
What are the types of testing you know and you experienced ?
After completing testing, what would you deliver to the client ?
What is a Test Bed ?
Why do you go for Test Bed ?
What are Data Guidelines ?
What is Severity and Priority and who will decide what ?
Can Automation testing replace manual testing ? If so, how ?
What is a test case?
What is a test condition ?
What is the test script ?
What is the test data ?
What is an Inconsistent bug ?
What is the difference between Re-testing and Regression testing ?
What are the different types of testing techniques ?
What are the different types of test case techniques ?
What are the risks involved in testing ?
Differentiate Test bed and Test Environment ?
What is the difference between defect, error, bug, failure, fault ?
What is the difference between quality and testing ?
What is the difference between White & Black Box Testing ?
What is the difference between Quality Assurance and Quality Control ?
What is the difference between Testing and debugging ?
What is the difference between bug and defect ?
What is the difference between verification and validation ?
What is the difference between functional spec. and Business requirement specification ?
What is the difference between unit testing and integration testing ?
What is the difference between Volume & Load testing ?


30 WinRunner Interview Questions

Which scripting language is used by WinRunner?
WinRunner uses TSL (Test Script Language), which is similar to C.
What's WinRunner?
WinRunner is Mercury Interactive's functional testing tool.
How many types of Run Modes are available in WinRunner?
WinRunner provides three types of Run Modes-
Verify Mode
Debug Mode
Update Mode
What's the Verify Mode?
In Verify Mode, WinRunner compares the current results of the application to its expected results.
What's the Debug Mode?
In Debug Mode, WinRunner helps you track down defects in a test script.
What's the Update Mode?
In Update Mode, WinRunner updates the expected results of the test script.
How many types of recording modes are available in WinRunner?
WinRunner provides two types of recording modes:
Context Sensitive
Analog
What's Context Sensitive recording?
Through Context Sensitive recording, WinRunner captures and records GUI objects, windows, keyboard input, and mouse-click activities.
When should Context Sensitive mode be chosen?
a. The application contains GUI objects.
b. Exact mouse movements are not required.
What's Analog recording?
Analog recording captures and records keyboard input, mouse clicks, and mouse movements. It does not capture GUI objects and windows.
When should Analog mode be chosen?
a. The application contains bitmap areas.
b. Exact mouse movements are required.
What are the components of WinRunner?
a. Test Window: the window where the TSL script is generated/programmed.
b. GUI Spy tool: WinRunner lets you spy on GUI objects by viewing their properties.
Where are Debug Results stored?
Debug Results are always saved in the debug folder.

What's the WinRunner testing process?
The WinRunner testing process involves six main steps-
Create GUI map
Create Test
Debug Test
Run Test
View Results
Report Defects
What's the GUI Spy?
You can view the physical properties of objects and windows through the GUI Spy.
How many modes are there for organizing GUI map files?
WinRunner provides two modes-
Global GUI map files
Per Test GUI map files
What's contained in GUI map files?
GUI map files store the information WinRunner learns about GUI objects and windows.
How does WinRunner recognize objects in the application?
WinRunner recognizes objects in the application through GUI map files.
What's the difference between the GUI map and GUI map files?
The GUI map is actually the sum of one or more GUI map files.
How do you view the GUI map content?
We can view the GUI map content through the GUI Map Editor.
What's a checkpoint?
A checkpoint enables you to check your application by comparing its expected results to the actual results.
What's the Execution Arrow?
The Execution Arrow indicates the line of the script being executed.
What's the Insertion Point?
The insertion point indicates the line of the script where you can edit and insert text.
What's Synchronization?
Synchronization enables you to solve anticipated timing problems between the test and the application.
What's the Function Generator?
The Function Generator provides a quick and error-free way to add TSL functions to the test script.
How many types of checkpoints are available in WinRunner ?
WinRunner provides four types of checkpoints-
GUI Checkpoint
Bitmap Checkpoint
Database Checkpoint
Text Checkpoint
What's contained in the Test Script?
A test script contains statements written in Test Script Language (TSL).
How do you modify the logical name or the physical description of the objects in the GUI map?
We can modify the logical name or the physical description of objects through the GUI Map Editor.

What is a Data Driven Test?
When you test your application, you may want to check how it performs the same operation with multiple sets of data.
How do you record a Data Driven Test?
We can create a Data Driven Test using flat files, data tables, or a database.
How do you clear GUI map files?
We can clear GUI map files through the "Clear All" option.
What are the steps of creating a Data Driven Test?
Data Driven Testing has four steps-
Creating test
Converting into Data Driven Test
Run Test
Analyze test
What is the Rapid Test Script Wizard?
It performs two tasks:
a. It systematically opens the windows in your application and learns a description of every GUI object. The wizard stores this information in a GUI map file.
b. It automatically generates tests based on the information it learned as it navigated through the application.
What are the different modes in learning an application under Rapid test script wizard ?
a. Express
b. Comprehensive.
What's the extension of GUI map files ?
GUI map files extension is ".gui".
What statement is generated by WinRunner when you check an object?
The obj_check_gui statement.
What statement is generated by WinRunner when you check a window?
The win_check_gui statement.
What statement is generated by WinRunner when you check a bitmap image over an object?
The obj_check_bitmap statement.
What statement is generated by WinRunner when you check a bitmap image over a window?
The win_check_bitmap statement.
What statement is used by WinRunner in Batch Testing?
The "call" statement.
Which shortcut key is used to freeze the GUI Spy?
"Ctrl+F3"
How many types of parameters are used by WinRunner?
WinRunner provides three types of parameters-
Test
Data Driven
Dynamic

How many types of merging are used by WinRunner?
WinRunner uses two types of merging-
Auto
Manual
What's the Virtual Object Wizard?
Whenever WinRunner is not able to recognize an object as an object, it uses the Virtual Object Wizard.
How do you handle unexpected events and errors?
WinRunner uses exception handling functions to handle unexpected events and errors.
How do you comment your script?
We comment a script line by inserting "#" at the beginning of the line.
What's the purpose of the set_window command?
The set_window command sets the focus to the specified window.
How did you create your test script?
By programming.
What's the command to invoke an application?
invoke_application
What do you mean by the logical name of an object?
The logical name of an object is determined by its class, but in most cases the logical name is the label that appears on the object.
How many types of GUI checkpoints are there?
In WinRunner, there are three types of GUI checkpoints-
For Single Properties
For Objects/Windows
For Multiple Objects
How many types of Bitmap Checkpoints are there?
In WinRunner, there are two types of Bitmap Checkpoints-
For Objects/Windows
For Screen Area
How many types of Database Checkpoints are there?
In WinRunner, there are three types of Database Checkpoints-
Default Check
Custom Check
Runtime Record Check
How many types of Text Checkpoints are there?
In WinRunner, there are four types of Text Checkpoints-
For Objects/Windows
From Screen Area
From Selection (Web Only)
Web text Checkpoints
What add-ins are available for WinRunner ?
Add-ins are available for Java, ActiveX, WebTest, Siebel, Baan, Stingray, Delphi, Terminal Emulator, Forte, NSDK/Natstar, Oracle and PowerBuilder.

Notes:
* WinRunner generates a menu_select_item statement whenever you select a menu item.
* WinRunner generates a set_window statement whenever you begin working in a new window.
* WinRunner generates an edit_set statement whenever you enter keyboard input.
* WinRunner generates an obj_mouse_click statement whenever you click an object with the mouse pointer.
* WinRunner generates obj_wait_bitmap or win_wait_bitmap statements whenever you synchronize the script on objects or windows.
* The ddt_open statement opens the data table.
* The ddt_close statement closes the data table.
* WinRunner inserts a win_get_text or obj_get_text statement in the script for checking text.
* The button_press statement presses a button.
* WinRunner generates a list_item_select statement whenever you select a value in a drop-down list.
* We can compare two files in WinRunner using the file_compare function.
* The tl_step statement is used to report whether a section of a test passes or fails.
* The call_close statement closes the test when the test is completed.

32 QTP Interview Questions

Full form of QTP ?
Quick Test Professional
What's QTP ?
QTP is Mercury Interactive's functional testing tool.
Which scripting language is used by QTP ?
QTP uses VBScript.
What's the basic concept of QTP ?
QTP is based on two concepts-
* Recording
* Playback
How many types of recording facilities are available in QTP ?
QTP provides three types of recording methods-
* Context Recording (Normal)
* Analog Recording
* Low Level Recording
How many types of Parameters are available in QTP ?
QTP provides three types of parameters-
* Method Argument
* Data Driven
* Dynamic

What's the QTP testing process?
The QTP testing process consists of seven steps-
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Run
* Analyze
* Report Defects

What's the Active Screen ?
It provides snapshots of your application as it appeared when you performed certain steps during the recording session.
What's the Test Pane ?
The Test Pane contains the Tree View and Expert View tabs.
What's the Data Table ?
It assists you in parameterizing the test.
What's the Test Tree ?
It provides a graphical representation of the operations you have performed on your application.
Which environments does QTP support ?
ERP/ CRM
Java/ J2EE
VB, .NET
Multimedia, XML
Web Objects, ActiveX controls
SAP, Oracle, Siebel, PeopleSoft
Web Services, Terminal Emulator
IE, NN, AOL
How can you view the Test Tree ?
The Test Tree is displayed through the Tree View tab.
What's the Expert View ?
The Expert View displays the test script.
Which shortcut key is used for Normal Recording ?
F3
Which shortcut key is used to run the test script ?
F5
Which shortcut key is used to stop the recording ?
F4
Which shortcut key is used for Analog Recording ?
Ctrl+Shift+F4
Which shortcut key is used for Low Level Recording ?
Ctrl+Shift+F3

Which shortcut key is used to switch between Tree View and Expert View ?
Ctrl+Tab
What's the Transaction ?
You can measure how long it takes to run a section of your test by defining transactions.
Where can you view the results of a checkpoint?
You can view the results of checkpoints in the Test Results window.
What's the Standard Checkpoint?
A Standard Checkpoint checks the property values of an object in your application or web page.
Which environments are supported by Standard Checkpoints?
Standard Checkpoints are supported in all add-in environments.
What's the Image Checkpoint?
An Image Checkpoint checks the value of an image in your application or web page.
Which environments are supported by Image Checkpoints?
Image Checkpoints are supported only in the Web environment.
What's the Bitmap Checkpoint?
A Bitmap Checkpoint checks bitmap images in your web page or application.
Which environments are supported by Bitmap Checkpoints?
Bitmap Checkpoints are supported in all add-in environments.
What's the Table Checkpoint?
A Table Checkpoint checks the information within a table.
Which environments are supported by Table Checkpoints?
Table Checkpoints are supported only in the ActiveX environment.
What's the Text Checkpoint?
A Text Checkpoint checks that a text string is displayed in the appropriate place in your application or web page.
Which environments are supported by Text Checkpoints?
Text Checkpoints are supported in all add-in environments.
Note:
* QTP records each step you perform and generates a test tree and test script.
* By default, QTP records in normal recording mode.
* If you are creating a test on a web object, you can record your test on one browser and run it on another browser.
* Analog Recording and Low Level Recording require more disk space than normal recording mode.

Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
1. What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time and database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in-house that can be adapted, web robot downloading tools, etc.)?
2. Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
3. What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
4. Will down time for server and content maintenance/upgrades be allowed? How much?
5. What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested?
6. How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
7. What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
8. Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
9. Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
10. How will internal and external links be validated and updated? How often?
11. Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
12. How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
13. How are cgi programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.
Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.
All pages should have links external to the page; there should be no dead-end pages.
The page owner, revision date, and a link to a contact person or organization should be included on each page.

Defect Severity determines the defect's effect on the application, whereas Defect Priority determines the urgency of repair.

Severity is given by Testers and Priority by Developers

1. High Severity & Low Priority: For example, consider an application which generates some banking-related reports weekly, monthly, quarterly & yearly by doing some calculations. If there is a fault in the yearly report calculation, it is a high severity fault but low priority, because this fault can be fixed in the next release as a change request.

2. High Severity & High Priority: In the above example, if there is a fault in the weekly report calculation, it is a high severity and high priority fault, because it will block the functionality of the application immediately, within a week. It should be fixed urgently.

3. Low Severity & High Priority: Suppose there is a spelling mistake or content issue on the homepage of a website which gets lakhs of hits daily. Though this fault does not affect the website's functionality, considering the status and popularity of the website in the competitive market it is a high priority fault.

4. Low Severity & Low Priority: If there is a spelling mistake on pages which get very few hits throughout the month, the fault can be considered low severity and low priority.


Testing types
* Functional testing: here we are checking the behavior of the software.
* Non-functional testing: here we are checking performance, usability, volume, and security.

Testing methodologies
* Static testing: In static testing we are not executing the code.
ex: Walkthroughs, Inspections, Reviews

* Dynamic testing: In dynamic testing we are executing the code.
ex: Black box, White box

Testing techniques
* White box
* Black box

Testing levels
* Unit testing
* Integration testing
* System testing
* Acceptance testing
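To illustrate the first two levels, here is a minimal Python sketch (the tax and invoice_total functions are hypothetical): a unit test checks one function in isolation, while an integration test checks the units working together.

def tax(amount):
    # Unit under test: flat 10% tax (hypothetical rule).
    return round(amount * 0.10, 2)

def invoice_total(amount):
    # Combines two units: the amount plus the tax on it.
    return round(amount + tax(amount), 2)

# Unit test: the tax function in isolation.
assert tax(50.0) == 5.0

# Integration test: the combined behavior of both units.
assert invoice_total(50.0) == 55.0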


Types of Black Box Testing

Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)

System testing - testing is based on overall requirements specifications; covers all combined parts of a system.

Integration testing - testing combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing - testing how well the system protects against unauthorized internal or external access, wilful damage, etc; may require sophisticated testing techniques.

Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Acceptance testing - determining if software is satisfactory to a customer.

Comparison testing - comparing software weaknesses and strengths to competing products

Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
1. What is the name of the testing activity that is done on a product before implementing that product into the client side?
ANS : Beta Testing

2. What is the difference between QA & QC activities?
ANS : QA measures the quality of the process, whereas QC measures the quality of the product.

3. What is path coverage?
ANS : Path coverage testing is the most comprehensive type of testing that a test suite can provide. It can find more bugs in a program, especially those that are caused by data coupling. However, path coverage is a testing level that is very hard to achieve, and usually only small and/or critical sections of code are checked in this way.
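For illustration, a minimal Python sketch (the classify function is hypothetical): two independent decisions yield four paths through the function, and full path coverage needs all four combinations, while branch coverage alone could be achieved with only two of these cases.

def classify(x, y):
    # Two independent decisions give four paths through the function.
    label = ""
    label += "pos" if x > 0 else "neg"
    label += "-high" if y > 10 else "-low"
    return label

# Path coverage: exercise every combination of decision outcomes.
assert classify(1, 20) == "pos-high"
assert classify(1, 5) == "pos-low"
assert classify(-1, 20) == "neg-high"
assert classify(-1, 5) == "neg-low"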

4. How many paths are possible?
ANS :

5. Is performance testing done in unit and system testing?
ANS: YES

6. UAT is done in__________ ______
A. Client's place by the client.
B. Client's place by the tester
C. In the company by the client.

7. Critical Defect can also be termed as --------------
ANS : Show Stopper

8. What is static & dynamic testing?
ANS : Testing without executing the code (e.g., reviews and walkthroughs) is static testing, whereas testing with the code being executed is dynamic testing.
9. What are the types of integration testing?
ANS : Bottom Up, Top Down, Big Bang and Hybrid.
10. Software has more bugs. What is it due to?
ANS : Many things; a few are unclear requirements, poor design, coding and testing, poor quality, and miscommunication.

11. Starting a Hero Honda Bike

(1) Requirements: a bike can be started in two ways; you need to have a kick rod or a self-start button. Before that come other requirements like petrol, engine, accelerator, ignition, etc.


(2) Usability: it shall have a flexible kick rod, a button system, and a speedometer for speed reading.
Try to read the user manual.


(3) Functional: kick the kick rod or push the button; you should be able to hear the engine start sound, able to accelerate the bike, and able to see the speed on the speedometer.


(4) Non-Functional: check for performance: are you able to start the bike more than once immediately, and again after a certain time, etc.? How long does it take to start, and what is the deviation between kick rod and button start? This is the process.


From this you are in a good position to write test cases.

Follow

(1) Requirements
(2) Usability (GUI + User Manual).
(3) Functionality.
(4) Non-Functionality.
These are the four aspects you need to concentrate on. Practice writing test cases for anything (stapler, glass, bucket, bike, mouse, Notepad, Paint, calculator, ATM, keyboard, anything).

1. What is the name of the testing activity that is done on a product before implementing that product into the client side?
Answer: Beta Testing.
3. A document that properly specifies all requirements of the customer is _________.
Answer: SRS/BRS
4. The ____________ is used to simulate the “Lower Interfacing Modules” in the Top-Down Approach.
Answer: Stubs.
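For illustration, a minimal Python sketch of a stub (all names here are hypothetical): in top-down integration, the upper module is tested first, with a stub simulating the lower interfacing module that is not yet built or integrated.

def fetch_balance_stub(account_id):
    # Stub: simulates the lower interfacing module (e.g., a database
    # layer) by returning canned data instead of doing real work.
    return {"A1": 100.0, "A2": 0.0}.get(account_id, 0.0)

def can_withdraw(account_id, amount, fetch_balance=fetch_balance_stub):
    # Upper module under test; the lower module is injected so the
    # stub can stand in for it during top-down integration testing.
    return fetch_balance(account_id) >= amount

assert can_withdraw("A1", 50.0) is True
assert can_withdraw("A2", 10.0) is False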
6. What is path coverage?
Answer:
coverage: The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test suite.

path coverage: The percentage of paths that have been exercised by a test suite. 100% path
coverage implies 100% LCSAJ coverage.

path testing: A white box test design technique in which test cases are designed to execute paths.

7. How many paths are possible?
Answer: The number of paths depends on the code; there is no fixed limit.
8. Is performance testing done in unit and system testing?
Answer: False; performance testing is done only in system testing.
9. UAT is done in:
Answer: There are two types of UAT, Alpha and Beta.
Alpha testing: done by a real customer at the developer's site.
Beta testing: done by end users at the customer's site.
10. Critical Defect can also be termed as
Ans: Show Stopper
11. What is the combination in Grey Box Testing?
Ans: Grey box testing combines black box and white box techniques; I am not sure there is any predefined proportion for this.
12. What is the cyclomatic number? Where do we use it?
Ans:
cyclomatic complexity: The number of independent paths through a program. Cyclomatic
complexity is defined as: L – N + 2P, where
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine)
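As a worked example (hypothetical but standard), consider a function containing a single if/else. Its flow graph has four nodes and four edges in one connected part, giving a cyclomatic complexity of 2, i.e. two independent paths (the if-branch and the else-branch):

# Flow graph of a single if/else:
#   decision -> then-node -> join/exit
#   decision -> else-node -> join/exit
# N = 4 nodes, L = 4 edges, P = 1 connected part.
L, N, P = 4, 4, 1
assert L - N + 2 * P == 2  # complexity = L - N + 2P = 2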

13. How can it be known when to stop testing?
Ans: When major bugs are fixed in all of the modules and we have reached a high enough confidence level that we can release the product. Sometimes there is no time to cover all the modules; in that case we go for ad-hoc testing.
14. What are the types of integration testing?
Ans: Top-Down approach, Bottom-Up approach, Big Bang approach.
15. Software has more bugs. What is it due to?
Ans: Poor testing.
19. What is static & dynamic testing?
Ans: Static testing does not execute the code but still finds faults (through reviews and walkthroughs).
Dynamic testing executes the code; black box and white box testing come under dynamic testing.
