Software Testing - Rajeshbabu Rajamanickam

Quality Assurance: “The totality of features or characteristics of a product or service that bear on its ability to satisfy stated or implied needs.” The British Standards 4778 and ISO 8402 definitions cite “Fitness for Purpose” and “Conformance with
Requirements.”

QA Terms (September 27, 2010)

<div align="justify"><span style="font-family:verdana;color:#000000;">Access Modeling<br />Used to verify that data requirements (represented in the form of an entity relationship diagram) support the data demands of process requirements (represented in data flow diagrams and process specifications).<br /><br />Affinity Diagram<br />A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.<br /><br />Application<br />A single software product that may or may not fully support a business function.<br /><br />Audit<br />An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the "eyes and ears" of management.<br /><br />Backlog<br />Work waiting to be done; for IT this includes new systems to be developed and enhancements to existing systems.
To be included in the development backlog, the work must have been cost-justified and approved for development.<br /><br />Baseline<br />A quantitative measure of the current level of performance.<br /><br />Benchmarking<br />Comparing your company’s products, services, or processes against best practices, or competitive practices, to help define superior performance of a product, service, or support process.<br /><br />Benefits Realization Test<br />A test or analysis conducted after an application is moved into production to determine whether it is likely to meet the originating business case.<br /><br />Black-box Testing<br />A test technique that focuses on testing the functionality of the program, component, or application against its specifications without knowledge of how the system is constructed; usually data or business process driven.<br /><br />Boundary Value Analysis<br />A data selection technique in which test data is chosen from the “boundaries” of the input or output domain classes, data structures, and procedure parameters. Choices often include the actual minimum and maximum boundary values, the maximum value plus or minus one, and the minimum value plus or minus one.<br /><br />Brainstorming<br />A group process for generating creative and diverse ideas.<br /><br />Branch Testing<br />A test method that requires that each possible branch on each decision point be executed at least once.<br /><br />Bug<br />A general term for all software defects or errors.<br /><br />Candidate<br />An individual who has met eligibility requirements for a credential awarded through a certification program, but who has not yet earned that certification through participation in the required skill and knowledge assessment instruments.<br /><br />Cause-Effect Graphing<br />A tool used to derive test cases from specifications.
A graph that relates causes (or input conditions) to effects is generated. The information in the graph is converted into a decision table where the columns are the cause-effect combinations. Unique rows represent test cases.<br /><br />Certificant<br />An individual who has earned a credential awarded through a certification program.<br /><br />Certification<br />A voluntary process instituted by a nongovernmental agency by which individual applicants are recognized for having achieved a measurable level of skill or knowledge. Measurement of the skill or knowledge makes certification more restrictive than simple registration, but much less restrictive than formal licensure.<br /><br />Checklists<br />A series of probing questions about the completeness and attributes of an application system. Well-constructed checklists cause evaluation of areas that are prone to problems. A checklist both limits the scope of the test and directs the tester to the areas in which there is a high probability of a problem.<br /><br />Checkpoint Review<br />Held at predefined points in the development process to evaluate whether certain quality factors (critical success factors) are being adequately addressed in the system being built. Independent experts conduct the reviews as early as possible for the purpose of identifying problems.<br /><br />Checksheet<br />A form used to record data as it is gathered.<br /><br />Client<br />The customer that pays for the product received and receives the benefit from the use of the product.<br /><br />Coaching<br />Providing advice and encouragement to an individual or individuals to promote a desired behavior.<br /><br />Code Comparison<br />One version of source or object code is compared to a second version. The objective is to identify those portions of computer programs that have been changed.
The technique is used to identify those segments of an application program that have been altered as a result of a program change.<br /><br />Compiler-Based Analysis<br />Most compilers for programming languages include diagnostics that identify potential program structure flaws. Many of these diagnostics are warning messages requiring the programmer to conduct additional investigation to determine whether or not the problem is real. Problems may include syntax problems, command violations, or variable/data reference problems. These diagnostic messages are a useful means of detecting program problems, and should be used by the programmer.<br /><br />Complete Test Set<br />A test set containing data that causes each element of a pre-specified set of Boolean conditions to be true. In addition, each element of the test set causes at least one condition to be true.<br /><br />Completeness<br />The property that all necessary parts of an entity are included. Often, a product is said to be complete if it has met all requirements.<br /><br />Complexity-Based Analysis<br />Based upon applying mathematical graph theory to programs and program design language specifications (PDLs) to determine a unit's complexity. This analysis can be used to measure and control complexity when maintainability is a desired attribute. It can also be used to estimate the test effort required and identify paths that must be tested.<br /><br />Compliance Checkers<br />A parser that looks for violations of company standards. Statements that contain violations are flagged. Company standards are rules that can be added, changed, and deleted as needed.<br /><br />Condition Coverage<br />A white-box testing technique that measures the number of, or percentage of, condition outcomes covered by the test cases designed.
100% condition coverage would indicate that every possible outcome of each condition had been executed at least once during testing.<br /><br />Configuration Management Tools<br />Tools that are used to keep track of changes made to systems and all related artifacts. These are also known as version control tools.<br /><br />Configuration Testing<br />Testing of an application on all supported hardware and software platforms. This may include various combinations of hardware types, configuration settings, and software versions.<br /><br />Consistency<br />The property of logical coherence among constituent parts. Consistency can also be expressed as adherence to a given set of rules.<br /><br />Consistent Condition Set<br />A set of Boolean conditions such that complete test sets for the conditions uncover the same errors.<br /><br />Control Flow Analysis<br />Based upon graphical representation of the program process. In control flow analysis, the program graph has nodes, which represent a statement or segment possibly ending in an unresolved branch. The graph illustrates the flow of program control from one segment to another as illustrated through branches. The objective of control flow analysis is to determine potential problems in logic branches that might result in a loop condition or improper processing.<br /><br />Conversion Testing<br />Validates the effectiveness of data conversion processes, including field-to-field mapping and data translation.<br /><br />Correctness<br />The extent to which software is free from design and coding defects (i.e., fault-free). It is also the extent to which software meets its specified requirements and user objectives.<br /><br />Cost of Quality (COQ)<br />Money spent beyond expected production costs (labor, materials, equipment) to ensure that the product the customer receives is a quality (defect-free) product.
The Cost of Quality includes prevention, appraisal, and correction or repair costs.<br /><br />Coverage-Based Analysis<br />A metric used to show the logic covered during a test session, providing insight into the extent of testing. The simplest metric for coverage is the number of computer statements executed during the test compared to the total number of statements in the program. To completely test the program structure, the test data chosen should cause the execution of all paths. Since this is not generally possible outside of unit test, general metrics have been developed which give a measure of the quality of test data based on its proximity to this ideal coverage. The metrics should take into consideration the existence of infeasible paths, which are paths in the program that no input data can cause to be executed.<br /><br />Customer<br />The individual or organization, internal or external to the producing organization, that receives the product.<br /><br />Cyclomatic Complexity<br />The number of decision statements, plus one.<br /><br />Data Dictionary<br />Provides the capability to create test data to test validation for the defined data elements. The test data generated is based upon the attributes defined for each data element. The test data will check both the normal values for each data element and the abnormal or error conditions for each data element.<br /><br />DD (decision-to-decision) path<br />A path of logical code sequence that begins at a decision statement or an entry and ends at a decision statement or an exit.<br /><br />Debugging<br />The process of analyzing and correcting syntactic, logic, and other errors identified during testing.<br /><br />Decision Coverage<br />A white-box testing technique that measures the number of, or percentage of, decision directions executed by the test cases designed.
100% decision<br />coverage would indicate that all decision directions had been executed at least<br />once during testing. Alternatively, each logical path through the program can<br />be tested. Often, paths through the program are grouped into a finite set of<br />classes, and one path from each class is tested.<br /><br />Decision Table<br />A tool for documenting the unique combinations of conditions and associated<br />results in order to derive unique test cases for validation testing.<br /><br />Defect<br />Operationally, it is useful to work with two definitions of a defect:<br />1. From the producer's viewpoint a defect is a product requirement<br />that has not been met or a product attribute possessed by a product<br />or a function performed by a product that is not in the statement of<br />requirements that define the product;<br />2. From the customer's viewpoint a defect is anything that causes<br />customer dissatisfaction, whether in the statement of requirements<br />or not.<br /><br />Defect Tracking Tools<br />Tools for documenting defects as they are found during testing and for<br />tracking their status through to resolution.<br /><br />Design Level<br />The design decomposition of the software item (e.g., system, subsystem,<br />program, or module).<br /><br />Desk Checking<br />The most traditional means for analyzing a system or a program. Desk<br />checking is conducted by the developer of a system or program. The process<br />involves reviewing the complete product to ensure that it is structurally sound<br />and that the standards and requirements have been met. This tool can also be<br />used on artifacts created during analysis and design.<br /><br />Driver<br />Code that sets up an environment and calls a module for test.<br /><br />Dynamic Analysis<br />Analysis performed by executing the program code. 
Dynamic analysis executes or simulates a development phase product, and it detects errors by analyzing the response of a product to sets of input data.<br /><br />Dynamic Assertion<br />A dynamic analysis technique that inserts into the program code assertions about the relationship between program variables. The truth of the assertions is determined as the program executes.<br /><br />Empowerment<br />Giving people the knowledge, skills, and authority to act within their area of expertise to do the work and improve the process.<br /><br />Entrance Criteria<br />Required conditions and standards for work product quality that must be present or met for entry into the next stage of the software development process.<br /><br />Equivalence Partitioning<br />The input domain of a system is partitioned into classes of representative values so that the number of test cases can be limited to one per class, which represents the minimum number of test cases that must be executed.<br /><br />Error or Defect<br />1. A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.<br />2. Human action that results in software containing a fault (e.g., omission or misinterpretation of user requirements in a software specification, incorrect translation, or omission of a requirement in the design specification).<br /><br />Error Guessing<br />Test data selection technique for picking values that seem likely to cause defects.
This technique is based upon the theory that test cases and test data can be developed based on the intuition and experience of the tester.<br /><br />Exhaustive Testing<br />Executing the program through all possible combinations of values for program variables.<br /><br />Exit Criteria<br />Standards for work product quality that block the promotion of incomplete or defective work products to subsequent stages of the software development process.<br /><br />File Comparison<br />Useful in identifying regression errors. A snapshot of the correct expected results must be saved so it can be used for later comparison.<br /><br />Flowchart<br />A pictorial representation of data flow and computer logic. It is frequently easier to understand and assess the structure and logic of an application system by developing a flowchart than by attempting to understand narrative descriptions or verbal explanations. Flowcharts for systems are normally developed manually, while flowcharts of programs can be produced automatically.<br /><br />Force Field Analysis<br />A group technique used to identify both driving and restraining forces that influence a current situation.<br /><br />Formal Analysis<br />Technique that uses rigorous mathematical techniques to analyze the algorithms of a solution for numerical properties, efficiency, and correctness.<br /><br />Functional Testing<br />Application of test data derived from the specified functional requirements without regard to the final program structure.<br /><br />Histogram<br />A graphical description of individually measured values in a data set, organized according to the frequency or relative frequency of occurrence.
A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.<br /><br />Infeasible Path<br />A sequence of program statements that can never be executed.<br /><br />Inputs<br />Materials, services, or information needed from suppliers to make a process work, or build a product.<br /><br />Inspection<br />A formal assessment of a work product conducted by one or more qualified independent reviewers to detect defects, violations of development standards, and other problems. Inspections involve authors only when specific questions concerning deliverables exist. An inspection identifies defects, but does not attempt to correct them. Authors take corrective actions and arrange follow-up reviews as needed.<br /><br />Instrumentation<br />The insertion of additional code into a program to collect information about program behavior during program execution.<br /><br />Integration Testing<br />This test begins after two or more programs or application components have been successfully unit tested. It is conducted by the development team to validate the technical quality or design of the application.
It is the first level of testing that formally integrates a set of programs that communicate among themselves via messages or files (a client and its server(s), a string of batch programs, or a set of online modules within a dialog or conversation).<br /><br />Invalid Input<br />Test data that lies outside the domain of the function the program represents.<br /><br />Leadership<br />The ability to lead, including inspiring others in a shared vision of what can be, taking risks, serving as a role model, reinforcing and rewarding the accomplishments of others, and helping others to act.<br /><br />Life Cycle Testing<br />The process of verifying the consistency, completeness, and correctness of software at each stage of the development life cycle.<br /><br />Management<br />A team or individuals who manage(s) resources at any level of the organization.<br /><br />Mapping<br />Provides a picture of the use of instructions during the execution of a program. Specifically, it provides a frequency listing of source code statements showing both the number of times an instruction was executed and which instructions were not executed. Mapping can be used to optimize source code by identifying the frequently used instructions. It can also be used to find unused code, which may indicate code that has not been tested, code that is infrequently used, or code that is non-entrant.<br /><br />Mean<br />A value derived by adding several quantities and dividing the sum by the number of these quantities.<br /><br />Metric-Based Test Data Generation<br />The process of generating test sets for structural testing based on use of complexity or coverage metrics.<br /><br />Model Animation<br />Model animation verifies that early models can handle the various types of events found in production data.
This is verified by “running” actual<br />production transactions through the models as if they were operational<br />systems.<br /><br />Model Balancing<br />Model balancing relies on the complementary relationships between the<br />various models used in structured analysis (event, process, data) to ensure that<br />modeling rules/standards have been followed; this ensures that these<br />complementary views are consistent and complete.<br /><br />Mission<br />A customer-oriented statement of purpose for a unit or a team.<br /><br />Mutation Analysis<br />A method to determine test set thoroughness by measuring the extent to which<br />a test set can discriminate the program from slight variants (i.e., mutants) of it.<br /><br />Network Analyzers<br />A tool used to assist in detecting and diagnosing network problems.<br /><br />Outputs<br />Products, services, or information supplied to meet customer needs.<br /><br />Pass/Fail Criteria<br />Decision rules used to determine whether a software item or feature passes or<br />fails a test.<br /><br />Path Expressions<br />A sequence of edges from the program graph that represents a path through<br />the program.<br /><br />Path Testing<br />A test method satisfying the coverage criteria that each logical path through<br />the program be tested. Often, paths through the program are grouped into a<br />finite set of classes and one path from each class is tested.<br /><br />Performance Test<br />Validates that both the online response time and batch run times meet the<br />defined performance requirements.<br /><br />Performance/Timing Analyzer<br />A tool to measure system performance.<br /><br />Phase (or Stage) Containment<br />A method of control put in place within each stage of the development process<br />to promote error identification and resolution so that defects are not<br />propagated downstream to subsequent stages of the development process. 
It is the verification, validation, and testing of work within the stage in which it is created.<br /><br />Policy<br />Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).<br /><br />Population Analysis<br />Analyzes production data to identify, independently from the specifications, the types and frequency of data that the system will have to process/produce. This verifies that the specifications can handle the types and frequency of actual data, and can be used to create validation tests.<br /><br />Procedure<br />The step-by-step method followed to ensure that standards are met.<br /><br />Process<br />1. The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.<br />2. The process or set of processes used by an organization or project to plan, manage, execute, monitor, control, and improve its software related activities. A set of activities and tasks. A statement of purpose and an essential set of practices (activities) that address that purpose.<br /><br />Process Improvement<br />To change a process to make the process produce a given product faster, more economically, or of higher quality. Such changes may require the product to be changed. The defect rate must be maintained or reduced.<br /><br />Product<br />The output of a process: the work product. There are three useful classes of products: Manufactured Products (standard and custom), Administrative/Information Products (invoices, letters, etc.), and Service Products (physical, intellectual, physiological, and psychological).
A statement of requirements defines products; one or more people working in a process produce them.<br /><br />Product Improvement<br />To change the statement of requirements that defines a product to make the product more satisfying and attractive to the customer (more competitive). Such changes may add to or delete from the list of attributes and/or the list of functions defining a product. Such changes frequently require the process to be changed. Note: This process could result in a very new product.<br /><br />Production Costs<br />The cost of producing a product. Production costs, as currently reported, consist of (at least) two parts: actual production or right-the-first-time (RFT) costs plus the Cost of Quality (COQ). RFT costs include the labor, materials, and equipment needed to provide the product correctly the first time.<br /><br />Productivity<br />The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process to the value of the input resources required (using fair market values for both input and output).<br /><br />Proof of Correctness<br />The use of mathematical logic techniques to show that a relationship between program variables assumed true at program entry implies that another relationship between program variables holds at program exit.<br /><br />Quality<br />A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meets requirements.
From a customer’s perspective, quality means “fit for<br />use.”<br /><br />Quality Assurance (QA)<br />The set of support activities (including facilitation, training, measurement, and<br />analysis) needed to provide adequate confidence that processes are established<br />and continuously improved to produce products that meet specifications and<br />are fit for use.<br /><br />Quality Control (QC)<br />The process by which product quality is compared with applicable standards,<br />and the action taken when nonconformance is detected. Its focus is defect<br />detection and removal. This is a line function; that is, the performance of these<br />tasks is the responsibility of the people working within the process.<br /><br />Quality Function Deployment (QFD)<br />A systematic matrix method used to translate customer wants or needs into<br />product or service characteristics that will have a significant positive impact on<br />meeting customer demands.<br /><br />Quality Improvement<br />To change a production process so that the rate at which defective products<br />(defects) are produced is reduced. Some process changes may require the<br />product to be changed.<br /><br />Recovery Test<br />Evaluates the contingency features built into the application for handling<br />interruptions and for returning to specific points in the application processing<br />cycle, including checkpoints, backups, restores, and restarts. This test also<br />assures that disaster recovery is possible.<br /><br />Regression Testing<br />Testing of a previously verified program or application following program<br />modification for extension or correction to ensure no new defects have been<br />introduced.<br /><br />Requirement<br />A formal statement of:<br />1. An attribute to be possessed by the product or a function to be performed<br />by the product<br />2. The performance standard for the attribute or function; and/or<br />3. 
The measuring process to be used in verifying that the standard has been<br />met.<br /><br />Risk Matrix<br />Shows the controls within application systems used to reduce the identified<br />risk, and in what segment of the application those risks exist. One dimension<br />of the matrix is the risk, the second dimension is the segment of the application<br />system, and within the matrix at the intersections are the controls. For<br />example, if a risk is “incorrect input” and the systems segment is “data entry,”<br />then the intersection within the matrix would show the controls designed to<br />reduce the risk of incorrect input during the data entry segment of the<br />application system.<br /><br />Run Chart<br />A graph of data points in chronological order used to illustrate trends or cycles<br />of the characteristic being measured to suggest an assignable cause rather than<br />random variation.<br /><br />Scatter Plot Diagram<br />A graph designed to show whether there is a relationship between two<br />changing variables.<br /><br />Self-validating Code<br />Code that makes an explicit attempt to determine its own correctness and to<br />proceed accordingly.<br /><br />Simulation<br />Use of an executable model to represent the behavior of an object. 
During testing, the computational hardware, the external environment, and even code segments may be simulated.<br /><br />Software Feature<br />A distinguishing characteristic of a software item (e.g., performance, portability, or functionality).<br /><br />Software Item<br />Source code, object code, job control code, control data, or a collection of these.<br /><br />Special Test Data<br />Test data based on input values that are likely to require special handling by the program.<br /><br />Standardize<br />To implement procedures that ensure that the output of a process is maintained at a desired level.<br /><br />Standards<br />The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.<br /><br />Statement of Requirements<br />The exhaustive list of requirements that define a product. Note that the statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirement determination process.<br /><br />Statement Testing<br />A test method that executes each statement in a program at least once during program testing.<br /><br />Static Analysis<br />Analysis of a program that is performed without executing the program. It may be applied to the requirements, design, or code.<br /><br />Statistical Process Control<br />The use of statistical techniques and tools to measure an ongoing process for change or stability.<br /><br />Stress Testing<br />This test subjects a system, or components of a system, to varying environmental conditions that defy normal expectations, for example high transaction volume, large database size, or restart/recovery circumstances.
The intention of stress testing is to identify constraints and to ensure that there are no performance problems.<br /><br />Structural Testing<br />A testing method in which the test data is derived solely from the program structure.<br /><br />Stub<br />A special code segment that, when invoked by a code segment under test, simulates the behavior of designed and specified modules not yet constructed.<br /><br />Supplier<br />An individual or organization that supplies inputs needed to generate a product, service, or information to a customer.<br /><br />Symbolic Execution<br />A method of symbolically defining data that forces program paths to be executed. Instead of executing the program with actual data values, the variable names that hold the input values are used. Thus, all variable manipulations and decisions are made symbolically. This process is used to verify the completeness of the structure, as opposed to assessing the functional requirements of the program.<br /><br />System<br />One or more software applications that together support a business function.<br /><br />System Test<br />During this event, the entire system is tested to verify that all functional, information, structural, and quality requirements have been met. A predetermined combination of tests is designed that, when executed successfully, satisfies management that the system meets specifications. System testing verifies the functional quality of the system in addition to all external interfaces, manual procedures, restart and recovery, and human-computer interfaces. It also verifies that interfaces between the application and the open environment work correctly, that JCL functions correctly, and that the application functions appropriately with the Database Management System, operations environment, and any communications systems.<br /><br />Test<br />1. A set of one or more test cases.<br />2.
A set of one or more test cases and procedures.<br /><br />Test Case Generator<br />A software tool that creates test cases from requirements specifications. Cases<br />generated this way are intended to cover 100% of the specified functionality.<br /><br />Test Case Specification<br />An individual test condition, executed as part of a larger test that contributes to<br />the test’s objectives. Test cases document the input, expected results, and<br />execution conditions of a given test item. Test cases are broken down into one<br />or more detailed test scripts and test data conditions for execution.<br /><br />Test Cycle<br />Test cases are grouped into manageable (and schedulable) units called test<br />cycles. Grouping is according to the relation of objectives to one another,<br />timing requirements, and the best way to expedite defect detection during<br />the testing event. Often test cycles are linked with execution of a batch<br />process.<br /><br />Test Data Generator<br />A software package that creates test transactions for testing application<br />systems and programs. The type of transactions that can be generated is<br />dependent upon the options available in the test data generator. With many<br />current generators, the prime advantage is the ability to create a large number<br />of transactions to volume test application systems.<br /><br />Test Data Set<br />Set of input elements used in the testing process.<br /><br />Test Design Specification<br />A document that specifies the details of the test approach for a software feature<br />or a combination of features and identifies the associated tests.<br /><br />Test Driver<br />A program that directs the execution of another program against a collection<br />of test data sets. 
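The Test Driver entry above can be sketched in a few lines of Python. Everything here (the `add` routine and its data sets) is an illustrative assumption, not taken from the original text:

```python
# Sketch of a test driver: a program that runs another routine against a
# collection of test data sets and records the outcomes.

def add(a, b):
    # Stand-in for the program under test.
    return a + b

def run_driver(func, data_sets):
    """Execute func for each (inputs, expected) pair and log the results."""
    log = []
    for inputs, expected in data_sets:
        actual = func(*inputs)
        log.append({"inputs": inputs, "expected": expected,
                    "actual": actual, "passed": actual == expected})
    return log

results = run_driver(add, [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)])
```

As the definition notes, a real driver would also organize and report the collected log, not just build it.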
Usually, the test driver also records and organizes the output<br />generated as the tests are run.<br /><br />Test Harness<br />A collection of test drivers and test stubs.<br /><br />Test Incident Report<br />A document describing any event during the testing process that requires<br />investigation.<br /><br />Test Item<br />A software item that is an object of testing.<br /><br />Test Item Transmittal Report<br />A document that identifies test items and includes status and location<br />information.<br /><br />Test Log<br />A chronological record of relevant details about the execution of tests.<br /><br />Test Plan<br />A document describing the intended scope, approach, resources, and schedule<br />of testing activities. It identifies test items, the features to be tested, the testing<br />tasks, the personnel performing each task, and any risks requiring contingency<br />planning.<br /><br />Test Procedure Specification<br />A document specifying a sequence of actions for the execution of a test.<br /><br />Test Scripts<br />A tool that specifies an order of actions that should be performed during a test<br />session. The script also contains expected results. 
Test scripts may be<br />manually prepared using paper forms, or may be automated using<br />capture/playback tools or other kinds of automated scripting tools.<br /><br />Test Stubs<br />Simulates a called routine so that the calling routine’s functions can be tested.<br />A test harness (or driver) simulates a calling component or external<br />environment, providing input to the called routine, initiating the routine, and<br />evaluating or displaying output returned.<br /><br />Test Suite Manager<br />A tool that allows testers to organize test scripts by function or other grouping.<br /><br />Test Summary Report<br />A document that describes testing activities and results and evaluates the<br />corresponding test items.<br /><br />Tracing<br />A process that follows the flow of computer logic at execution time. Tracing<br />demonstrates the sequence of instructions or a path followed in accomplishing<br />a given task. The two main types of trace are tracing instructions in computer<br />programs as they are executed, and tracing the path through a database to locate<br />predetermined pieces of information.<br /><br />Unit Test<br />Testing individual programs, modules, or components to demonstrate that the<br />work package executes per specification and to validate the design and technical<br />quality of the application. The focus is on ensuring that the detailed logic<br />within the component is accurate and reliable according to pre-determined<br />specifications. Testing stubs or drivers may be used to simulate behavior of<br />interfacing modules.<br /><br />Usability Test<br />The purpose of this event is to review the application user interface and other<br />human factors of the application with the people who will be using the<br />application. This is to ensure that the design (layout and sequence, etc.)<br />enables the business functions to be executed as easily and intuitively as<br />possible. 
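The Test Stubs entry above can be illustrated with a short sketch. The tax-rate service and its rates are hypothetical assumptions made up for this example:

```python
# Sketch of a test stub: a stand-in for a called routine that has not been
# built yet, so the calling routine can be tested in isolation.

def tax_rate_stub(region):
    """Simulates the not-yet-implemented tax service with canned data."""
    return {"US": 0.07, "EU": 0.20}.get(region, 0.0)

def price_with_tax(net, region, rate_lookup=tax_rate_stub):
    # Calling routine under test; the real lookup replaces the stub later.
    return round(net * (1 + rate_lookup(region)), 2)
```

Passing the lookup in as a parameter is one common way to let the stub be swapped for the real module without editing the caller.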
This review includes assuring that the user interface adheres to<br />documented User Interface standards, and should be conducted early in the<br />design stage of development. Ideally, an application prototype is used to walk<br />the client group through various business scenarios, although paper copies of<br />screens, windows, menus, and reports can be used.<br /><br />User<br />The customer that actually uses the product received.<br /><br />User Acceptance Test<br />User Acceptance Testing (UAT) is conducted to ensure that the system meets<br />the needs of the organization and the end user/customer. It validates that the<br />system will work as intended by the user in the real world, and is based on real<br />world business scenarios, not system requirements. Essentially, this test<br />validates that the right system was built.<br /><br />Valid Input<br />Test data that lie within the domain of the function represented by the<br />program.<br /><br />Validation<br />Determination of the correctness of the final program or software produced<br />from a development project with respect to the user needs and requirements.<br />Validation is usually accomplished by verifying each stage of the software<br />development life cycle.<br /><br />Values (Sociology)<br />The ideals, customs, institutions, etc., of a society toward which the people<br />have an affective regard. These values may be positive, as cleanliness,<br />freedom, or education, or negative, as cruelty, crime, or blasphemy. Any<br />object or quality desired as a means or as an end in itself.<br /><br />Verification<br />1. The process of determining whether the products of a given phase of the<br />software development cycle fulfill the requirements established during the<br />previous phase.<br />2. 
The act of reviewing, inspecting, testing, checking, auditing, or otherwise<br />establishing and documenting whether items, processes, services, or<br />documents conform to specified requirements.<br /><br />Vision<br />A vision is a statement that describes the desired future state of a unit.<br /><br />Walkthroughs<br />During a walkthrough, the producer of a product “walks through” or<br />paraphrases the product’s content, while a team of other individuals follow<br />along. The team’s job is to ask questions and raise issues about the product<br />that may lead to defect identification.<br /><br />White-box Testing<br />A testing technique that assumes that the path of the logic in a program unit or<br />component is known. White-box testing usually consists of testing paths,<br />branch by branch, to produce predictable results. This technique is usually<br />used during tests executed by the development team, such as Unit or<br />Component testing.</span></div>Rajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-3646981091559518292010-09-27T17:15:00.000-07:002010-09-27T17:52:54.815-07:00QA - Definitions<strong></strong><br /><strong>Acceptance Testing: </strong>Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.<br /><br /><strong>Accessibility Testing:</strong> Verifying that a product is accessible to people with disabilities (e.g., visual, hearing, or cognitive impairments).<br /><br /><strong>Ad Hoc Testing:</strong> A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. 
See also Monkey Testing.<br /><br /><strong>Agile Testing:</strong> Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.<br />Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary forms across different system platforms and environments.<br /><br /><strong>Application Programming Interface (API):</strong> A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.<br /><strong>Automated Software Quality (ASQ):</strong> The use of software tools, such as automated testing tools, to improve software quality.<br /><br /><strong>Automated Testing</strong>:<br />• Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.<br />• The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.<br /><strong>Backus-Naur Form:</strong> A metalanguage used to formally describe the syntax of a language.<br /><strong>Basic Block:</strong> A sequence of one or more consecutive, executable statements containing no branches.<br /><br /><strong>Basis Path Testing:</strong> A white box test case design technique that uses the algorithmic flow of the program to design tests.<br /><strong>Basis Set</strong>: The set of tests derived using basis path testing.<br /><strong>Baseline:</strong> The point at which some deliverable produced during the software engineering process is put under formal change control.<br /><strong>Benchmark Testing:</strong> Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.<br 
/><strong>Beta Testing:</strong> Testing of a pre-release of a software product conducted by customers.<br />Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.<br /><strong>Black Box Testing:</strong> Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.<br />Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.<br /><strong>Boundary Testing:</strong> Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).<br />Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases".<br /><strong>Branch Testing:</strong> Testing in which all branches in the program source code are tested at least once.<br /><strong>Breadth Testing:</strong> A test suite that exercises the full functionality of a product but does not test features in detail.<br /><strong>Bug:</strong> A fault in a program which causes the program to perform in an unintended or unanticipated manner.<br /><strong>CAST:</strong> Computer Aided Software Testing.<br />Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. 
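The Boundary Value Analysis entry above can be illustrated with a short sketch. The age validator and its 18–65 range are hypothetical assumptions for this example:

```python
# Boundary value analysis sketch: pick test inputs at and just around the
# edges of a valid range, as the BVA entry describes.

def boundary_values(minimum, maximum):
    """BVA candidates: each boundary value, plus and minus one."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

def accepts_age(age):
    # Hypothetical validator under test: valid ages are 18 through 65.
    return 18 <= age <= 65

outcomes = {age: accepts_age(age) for age in boundary_values(18, 65)}
```

Off-by-one defects (e.g. writing `<` instead of `<=`) are exactly what the min/min+1 and max/max+1 pairs are designed to catch.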
Most commonly applied to GUI test tools.<br /><strong>CMM:</strong> The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.<br /><strong>Cause Effect Graph:</strong> A graphical representation of inputs and their associated output effects which can be used to design test cases.<br /><strong>Code Complete:</strong> Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.<br /><strong>Code Coverage:</strong> An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.<br /><strong>Code Inspection:</strong> A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.<br /><strong>Code Walkthrough:</strong> A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.<br />Coding: The generation of source code.<br />Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. 
browsers, Operating Systems, or hardware.<br /><strong>Component:</strong> A minimal software item for which a separate specification is available.<br /><strong>Component Testing:</strong> See Unit Testing.<br />Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.<br /><strong>Conformance Testing:</strong> The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.<br /><strong>Context Driven Testing: </strong>The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.<br /><strong>Conversion Testing:</strong> Testing of programs or procedures used to convert data from existing systems for use in replacement systems.<br /><strong>Cyclomatic Complexity:</strong> A measure of the logical complexity of an algorithm, used in white-box testing.<br /><strong>Data Dictionary:</strong> A database that contains definitions of all data items defined during analysis.<br /><strong>Data Flow Diagram:</strong> A modeling notation that represents a functional decomposition of a system.<br />Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. 
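The Data Driven Testing entry above can be sketched as follows. A real suite would read the data from a file or spreadsheet; here the CSV text is inlined so the example is self-contained, and the login routine and its credentials are hypothetical:

```python
# Data-driven testing sketch: one test action, parameterized by rows of
# externally maintained data.
import csv
import io

CSV_ROWS = """username,password,expect_ok
alice,correct-horse,True
alice,wrong,False
,correct-horse,False
"""

def login(username, password):
    # Hypothetical routine under test.
    return username == "alice" and password == "correct-horse"

failures = []
for row in csv.DictReader(io.StringIO(CSV_ROWS)):
    # Compare the actual result against the expectation stored in the data.
    if login(row["username"], row["password"]) != (row["expect_ok"] == "True"):
        failures.append(row)
```

Adding a new test case then means adding a data row, not writing new test code.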
A common technique in Automated Testing.<br /><strong>Debugging: </strong>The process of finding and removing the causes of software failures.<br /><strong>Defect:</strong> Nonconformance to requirements or functional / program specification.<br />Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.<br />Depth Testing: A test that exercises a feature of a product in full detail.<br />Dynamic Testing: Testing software through executing it. See also Static Testing.<br /><strong>Emulator:</strong> A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.<br />Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.<br />End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.<br />Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same, based on the component's specification.<br />Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.<br />Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.<br />Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.<br />Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.<br />Functional Testing: See also Black Box Testing.<br />• Testing the features and operational behavior of a product to ensure they correspond to its 
specifications.<br />• Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.<br /><strong>Glass Box Testing:</strong> A synonym for White Box Testing.<br /><strong>Gorilla Testing:</strong> Testing one particular module or functionality heavily.<br /><strong>Gray Box Testing:</strong> A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.<br /><strong>High Order Tests:</strong> Black-box tests conducted once the software has been integrated.<br />Independent Test Group (ITG): A group of people whose primary responsibility is software testing.<br /><strong>Inspection:</strong> A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).<br /><strong>Integration Testing:</strong> Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.<br /><strong>Installation Testing:</strong> Confirms that the application under test installs, loads, and runs correctly on the supported hardware and software configurations, including install, upgrade, and uninstall scenarios.<br /><strong>Localization Testing:</strong> Verifying that software has been correctly adapted for a specific locale, including language, formats, and local conventions.<br /><strong>Loop Testing:</strong> A white box testing technique that exercises program loops.<br /><strong>Metric:</strong> A standard of measurement. Software metrics are the statistics describing the structure or content of a program. 
A metric should be a real objective measurement of something such as number of bugs per lines of code.<br /><strong>Monkey Testing:</strong> Testing a system or an application on the fly, i.e., just a few tests here and there to ensure the system or an application does not crash out.<br /><strong>Mutation Testing:</strong> Testing in which defects are deliberately seeded into the application to check whether the test suite detects them.<br /><strong>Negative Testing:</strong> Testing aimed at showing software does not work. Also known as "test to fail".<br />N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.<br /><strong>Path Testing:</strong> Testing in which all paths in the program source code are tested at least once.<br /><strong>Performance Testing:</strong> Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".<br /><strong>Positive Testing:</strong> Testing aimed at showing software works. 
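The Positive and Negative Testing entries can be illustrated side by side: one check shows the software working on valid input ("test to pass"), the other shows it refusing invalid input ("test to fail"). The `withdraw` function is a hypothetical example:

```python
# Positive vs. negative testing sketch on a hypothetical withdraw routine.

def withdraw(balance, amount):
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

def is_rejected(balance, amount):
    """True if an invalid request is properly refused (negative test)."""
    try:
        withdraw(balance, amount)
    except ValueError:
        return True
    return False

positive_ok = withdraw(100, 40) == 60                          # test to pass
negative_ok = is_rejected(100, 500) and is_rejected(100, -1)   # test to fail
```

A suite that only ever runs the positive case would miss the unguarded error paths that negative tests exist to probe.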
Also known as "test to pass".<br /><strong>Quality Assurance:</strong> All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.<br /><strong>Quality Audit:</strong> A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.<br /><strong>Quality Circle:</strong> A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.<br /><strong>Quality Control:</strong> The operational techniques and the activities used to fulfill and verify requirements of quality.<br /><strong>Quality Management:</strong> That aspect of the overall management function that determines and implements the quality policy.<br /><strong>Quality Policy:</strong> The overall intentions and direction of an organization as regards quality as formally expressed by top management.<br /><strong>Quality System:</strong> The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.<br /><strong>Race Condition:</strong> A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism to moderate simultaneous access.<br />Ramp Testing: Continuously raising an input signal until the system breaks down.<br />Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. 
Events can include shortage of disk space, unexpected loss of communication, or power out conditions.<br />Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.<br /><strong>Release Candidate</strong>: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).<br /><strong>Sanity Testing:</strong> Brief test of major functional elements of a piece of software to determine if it's basically operational. See also Smoke Testing.<br /><strong>Scalability Testing:</strong> Performance testing focused on ensuring the application under test gracefully handles increases in work load.<br /><strong>Security Testing</strong>: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.<br /><strong>Smoke Testing:</strong> A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.<br /><strong>Soak Testing:</strong> Running a system at high load for a prolonged period of time. 
For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.<br />Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.<br /><strong>Software Testing:</strong> A set of activities conducted with the intent of finding errors in software.<br /><strong>Static Analysis:</strong> Analysis of a program carried out without executing the program.<br /><strong>Static Analyzer:</strong> A tool that carries out static analysis.<br /><strong>Static Testing:</strong> Analysis of a program carried out without executing the program.<br /><strong>Storage Testing:</strong> Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.<br /><strong>Stress Testing:</strong> Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.<br /><strong>Structural Testing:</strong> Testing based on an analysis of internal workings and structure of a piece of software. 
See also White Box Testing.<br /><strong>System Testing</strong>: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.<br /><strong>Testability:</strong> The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.<br /><strong>Testing: </strong><br />• The process of exercising software to verify that it satisfies specified requirements and to detect errors.<br />• The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).<br />• The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.<br /><strong>Test Bed:</strong> An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.<br /><strong>Test Case: </strong><br />• Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements tested, test steps, verification steps, prerequisites, outputs, test environment, etc.<br />• A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.<br />Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. 
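The Test Driven Development entry above can be sketched in miniature: the test is written first, and the function holds just enough code to make it pass. The `slugify` function is a hypothetical example, not from the original text:

```python
# Test-first sketch: in TDD, test_slugify is written (and run, failing)
# before slugify exists; the implementation is then driven by the test.

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Extra   Spaces  ") == "extra-spaces"

def slugify(text):
    # Minimal implementation: just enough to satisfy the test above.
    return "-".join(text.lower().split())

test_slugify()
```

The cycle then repeats: add a failing assertion, extend the code until it passes, and refactor with the tests as a safety net.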
Practitioners of TDD write a lot of tests, i.e. an equal number of lines of test code to the size of the production code.<br /><strong>Test Driver:</strong> A program or test tool used to execute tests. Also known as a Test Harness.<br /><strong>Test Environment:</strong> The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.<br /><strong>Test First Design:</strong> Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.<br /><strong>Test Harness:</strong> A program or test tool used to execute tests. Also known as a Test Driver.<br /><strong>Test Plan:</strong> A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.<br /><strong>Test Procedure:</strong> A document providing detailed instructions for the execution of one or more test cases.<br /><strong>Test Scenario:</strong> Definition of a set of test cases or test scripts and the sequence in which they are to be executed.<br /><strong>Test Script:</strong> Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.<br /><strong>Test Specification:</strong> A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.<br /><strong>Test Suite:</strong> A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. 
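The Test Suite entry above can be illustrated with Python's built-in unittest support for grouping tests. The two test classes are hypothetical placeholders for function-level groupings:

```python
# Sketch of grouping tests into a suite, one class per functional area.
import unittest

class LoginTests(unittest.TestCase):
    def test_strip(self):
        self.assertEqual("  alice  ".strip(), "alice")

class ReportTests(unittest.TestCase):
    def test_upper(self):
        self.assertEqual("ok".upper(), "OK")

# Build one suite from both groups and run it as a unit.
loader = unittest.defaultTestLoader
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(LoginTests))
suite.addTests(loader.loadTestsFromTestCase(ReportTests))
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same suite object can be scheduled, filtered, or nested inside larger suites, which is what makes it useful as a grouping concept.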
In most cases, however, a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.<br /><strong>Test Tools:</strong> Computer programs used in the testing of a system, a component of the system, or its documentation.<br /><strong>Thread Testing:</strong> A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.<br /><strong>Top Down Testing:</strong> An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.<br /><strong>Total Quality Management:</strong> A company commitment to develop a process that achieves high-quality products and customer satisfaction.<br /><strong>Traceability Matrix:</strong> A document showing the relationship between Test Requirements and Test Cases.<br />Usability Testing: Testing the ease with which users can learn and use a product.<br /><strong>Use Case:</strong> The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.<br /><strong>User Acceptance Testing:</strong> A formal product evaluation performed by a customer as a condition of purchase.<br /><strong>Unit Testing:</strong> Testing of individual software components.<br /><strong>Validation:</strong> The process of evaluating software at the end of the software development process to ensure compliance with software requirements. 
The techniques for validation are testing, inspection, and reviewing.<br /><strong>Verification:</strong> The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.<br /><strong>Volume Testing:</strong> Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.<br /><strong>Walkthrough:</strong> A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.<br /><strong>White Box Testing:</strong> Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. 
Contrast with Black Box Testing.<br /><strong>Workflow Testing: </strong>Scripted end-to-end testing that duplicates specific workflows expected to be used by the end-user.Rajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-66187506416657545162010-09-27T17:41:00.000-07:002010-09-27T17:42:09.679-07:00Dictionary of Insurance Terms<span style="font-family:arial;">A<br />• Absolute Liability: Liability for damages even though fault or negligence cannot be proven.<br />• Accident: An event or occurrence which is unforeseen and unintended.<br />• Accidental Bodily Injury: Injury to the body as the result of an accident.<br />• Accounting: The process of recording, summarizing, and allocating all items of income and expense of the company and analyzing, verifying, and reporting the results.<br />• Act of God: A flood, earthquake, or other non-preventable accident resulting from natural causes, occurring without any human intervention.<br />• Activities of Daily Living: A list of activities, normally including mobility, dressing, bathing, toileting, transferring, and eating, which are used to assess degree of impairment and determine eligibility for some types of insurance benefits.<br />• Actual Cash Value (ACV): 1) The cost of replacing or restoring property at prices prevailing at the time and place of the loss, less depreciation, however caused; 2) replacement cost minus depreciation.<br />• Actuarially Fair: The price for insurance which exactly represents the expected losses.<br />• Actuary: A person professionally trained in the technical aspects of pensions, insurance, and related fields. 
The actuary estimates how much money must be contributed to an insurance or pension fund in order to provide future<br />• Additional Insured: A person, company, or entity protected by an insurance policy in addition to the insured.<br />• Adjuster: A person who investigates and settles losses for an insurance carrier.<br />• Adjusting: The process of investigating and settling losses with or by an insurance carrier.<br />• Adjustment Bureau: Organization for adjusting insurance claims that is supported by insurers using the bureau's services.<br />• Administrative Services Only (ASO) Plan: An arrangement under which an insurance carrier or an independent organization will, for a fee, handle the administration of claims, benefits, and other administrative functions for a self-insured group.<br />• Advance Premium Mutual: Mutual insurance company owned by the policy owners that does not issue assessable policies but charges premiums expected to be sufficient to pay all claims and expenses.<br />• Adverse Selection: The tendency of persons who present a poorer-than-average risk to apply for, or continue, insurance to a greater extent than do persons with average or better-than-average expectations of loss.<br />• Age Limits: Stipulated minimum and maximum ages below and above which the company will not accept applications or may not renew policies.<br />• Agent: An insurance company representative licensed by the state who solicits, negotiates, or effects contracts of insurance, and provides service to the policyholder for the insurer.<br />• Aggregate Deductible: Deductible in some property and health insurance contracts in which all covered losses during a year are added together and the insurer pays only when the aggregate deductible amount is exceeded.<br />• Aggregate Indemnity: The maximum dollar amount that may be collected for any disability or period of disability under the policy.<br />• Alien Insurer: An insurance company domiciled in another country.<br />• 
Allied Lines: A term for forms of property insurance allied with fire insurance, covering such perils as windstorm, hail, explosion, and riot.<br />• Allocated Benefits: Benefits for which the maximum amount payable for specific services is itemized in the contract.<br />• All-risks Policy: Coverage by an insurance contract that promises to cover all losses except those losses specifically excluded in the policy. See also: Risks of direct loss to property.<br />• Amendment: A formal document changing the provisions of an insurance policy, signed jointly by the insurance company officer and the policy holder or his authorized representative.<br />• Amortization: Paying an interest-bearing liability by gradual reduction through a series of installments, as opposed to one lump-sum payment.<br />• Annual Statement: The annual report, as of December 31, of an insurer to a state insurance department, showing assets and liabilities, receipts and disbursements, and other financial data.<br />• Application: A signed statement of facts made by a person applying for life insurance and then used by the insurance company to decide whether or not to issue a policy. The application becomes part of the insurance contract when the policy is issued.<br />• Arbitration: A form of alternative dispute resolution where an unbiased person or panel renders an opinion as to responsibility for or extent of a loss.<br />• Arson: The willful and malicious burning of, or attempt to burn, any structure or other property, often with criminal or fraudulent intent.<br />• Assessment Association: An insurer that does not charge a fixed premium for insurance, but rather assesses its members periodically to pay its losses. 
Assessment insurers usually collect an advance premium which is estimated to cover losses and expenses, but reserve the right to make additional assessments whenever the premium collected is insufficient.<br />• Assessment Mutual: Mutual insurance company that has the right to assess policy owners for losses and expenses.<br />• Assets: All funds, property, goods, securities, rights of action, or resources of any kind owned by an insurance company. Statutory accounting, however, excludes nonadmitted assets, such as deferred or overdue premiums, that would be considered assets under generally accepted accounting principles (GAAP).<br />• Assignment: The legal transfer of one person's interest in an insurance policy to another person.<br />• Association Captive: Type of captive insurer owned by members of a sponsoring organization or group, such as a trade association.<br />• Association Group: A group formed from members of a trade or a professional association for group insurance under one master health insurance contract.<br />• Assumption of Risk Doctrine: Defense against a negligence claim that bars recovery for damages if a person understands and recognizes the danger inherent in a particular activity or occupation.<br />• Attractive Nuisance: Condition that can attract and injure children. Occupants of land on which such a condition exists are liable for injuries to children.<br />• Automatic Reinsurance: An agreement that the insurer must cede and the reinsurer must accept all risks within certain explicitly defined limits. The reinsurer undertakes in advance to grant reinsurance to the extent specified in the agreement in every case where the ceding company accepts the application and retains its own limit.<br />• Automobile Insurance Plan: One of several types of "shared market" mechanisms where persons who are unable to obtain such insurance in the voluntary market are assigned to a particular company, usually at a higher rate than the voluntary market. 
Formerly called "Assigned Risk."<br />• Automobile Liability Insurance: Protection for the insured against financial loss because of legal liability for car-related injuries to others or damage to their property.<br />• Automobile Physical Damage Insurance: Coverage to pay for damage to or loss of an insured automobile resulting from collision, fire, theft, or other perils.<br />• Automobile Reinsurance Facility: One of several types of "shared market" mechanisms used to make automobile insurance available to persons who are unable to obtain such insurance in the regular market.<br />• Aviation Insurance: Aircraft insurance including coverage of aircraft or their contents, the owner's liability, and accident insurance on the passengers.<br />• Average Indexed Monthly Earnings (AIME): Under the OASDI program, the person's actual earnings are indexed to determine his or her primary insurance amount (PIA).<br />• Avoidance: see Loss Avoidance. </span><br /><span style="font-family:arial;"><br />B<br />• Bailees Customers Policy: Policy that covers the loss or damage to property of customers regardless of a bailee's legal liability.<br />• Basic Form: see Dwelling Property 1.<br />• Basis: An amount attributed to an asset for income tax purposes; used to determine gain or loss on sale or transfer; used to determine the value of a gift.<br />• Beneficiary: The person designated or provided for by the policy terms to receive any benefits provided by the policy or plan upon the death of the insured.<br />• Benefits: The amount payable by the insurance company to a claimant, assignee or beneficiary under each coverage.<br />• Binder: A written or oral contract issued temporarily to place insurance in force when it is not possible to issue a new policy or endorse the existing policy immediately. A binder is subject to the premium and all the terms of the policy to be issued.<br />• Binding Receipt: A receipt given for a premium payment accompanying the application for insurance. 
If the policy is approved, this binds the company to make the policy effective from the date of the receipt.<br />• Blanket Medical Expense: A provision which entitles the insured person to collect up to a maximum established in the policy for all hospital and medical expenses incurred, without any limitations on individual types of medical expenses.<br />• Boat Owners Package Policy: A special package policy for boat owners that combines physical damage insurance, medical expense insurance, liability insurance, and other coverages in one contract.<br />• Boiler and Machinery Insurance: Coverage for loss arising out of the operation of pressure, mechanical, and electrical equipment. It covers loss of the boiler and machinery itself, damage to other property, and business interruption losses.<br />• Bond: A certificate issued by a government or corporation as evidence of a debt. The issuer of the bond promises to pay the bondholder a specified amount of interest for a specified period and to repay the loan on the expiration (maturity) date.<br />• Book of Business: The number, size, and type of accounts (policyholders) that an agent "owns."<br />• Branch Office System: Type of life insurance marketing system under which branch offices are established in various areas. 
Salaried branch managers, who are employees of the company, are responsible for hiring and training new agents.<br />• Break in Service: A calendar year, plan year, or other consecutive 12-month period designated by the plan during which a plan participant does not complete more than 500 hours of service.<br />• Broad Form: see Dwelling Property 2; Homeowners 2 Policy.<br />• Broker: A marketing specialist who represents buyers of property and liability insurance and who deals with either agents or companies in arranging for the coverage required by the customer.<br />• Burglary: Breaking and entering into another person's property with felonious intent.<br />• Burglary and Theft Insurance: Coverage against property losses due to burglary, robbery, or larceny.<br />• Business Insurance: A policy which primarily provides coverage of benefits to a business as contrasted to an individual. It is issued to indemnify a business for the loss of services of a key employee or a partner who becomes disabled.<br />• Business Interruption Insurance: Protection for a business owner against losses resulting from a temporary shutdown because of fire or other insured peril. The insurance provides reimbursement for lost net profits and necessary continuing expenses.<br />• Business Life Insurance: Life insurance purchased by a business enterprise on the life of a member of the firm. It is often bought by partnerships to protect the surviving partners against loss caused by the death of a partner, or by a corporation to reimburse it for loss caused by the death of a key employee.<br />• Buy-Sell Agreement: An agreement made by the owners of a business to purchase the share of a disabled or deceased owner. 
The value of each owner's share of the business and the exact terms of the buying-and-selling process are established before death or the beginning of disability.<br />C<br />• Cancellation: The discontinuance of an insurance policy before its normal expiration date, either by the insured or the company.<br />• Capacity: The amount of capital available to an insurance company or to the industry as a whole for underwriting general insurance coverage or coverage for specific perils.<br />• Capital Gain: Profit realized on the sale of securities. An unrealized capital gain is an increase in the value of securities that have not been sold.<br />• Capital Retention Approach: A method used to estimate the amount of life insurance to own. Under this method, the insurance proceeds are retained and are not liquidated.<br />• Captive Insurance Company: A company owned solely or in large part by one or more non-insurance entities for the primary purpose of providing insurance coverage to the owner or owners.<br />• Captive Insurer: Insurance company established and owned by a parent firm in order to insure its loss exposures while reducing premium costs, providing easier access to a reinsurer, and perhaps easing tax burdens. See also Association captive; Pure captive.<br />• Cargo Insurance: Type of ocean marine insurance that protects the shipper of the goods against financial loss if the goods are damaged or lost.<br />• Casualty Insurance: Insurance concerned with the insured's legal liability for injuries to others or damage to other persons' property; also encompasses such forms of insurance as plate glass, burglary, robbery, and workers' compensation.<br />• Catastrophe: Event which causes a loss of extraordinary magnitude, such as a hurricane or tornado.<br />• Causes-of-loss Form: Form added to commercial property insurance policy that indicates the causes of loss that are covered. 
There are four causes-of-loss forms: basic, broad, special, and earthquake.<br />• Cede: To transfer all or part of a risk written by an insurer (the ceding, or primary company) to a reinsurer.<br />• Certificate of Insurance: A statement of coverage issued to an individual insured under a group insurance contract, outlining the insurance benefits and principal provisions applicable to the member.<br />• Certified Financial Planner (CFP): Professional who has attained a high degree of technical competency in financial planning and has passed a series of professional examinations by the College of Financial Planning.<br />• Certified Insurance Counselor (CIC): Professional in property and liability insurance who has passed a series of examinations by the Society of Certified Insurance Counselors.<br />• Cession: Amount of the insurance ceded to a reinsurer by the original insuring company in a reinsurance operation.<br />• Chartered Life<br />• Chartered Property and Casualty Underwriter (CPCU): Professional who has attained a high degree of technical competency in property and liability insurance and has passed ten professional examinations administered by the American Institute for Property and Liability Underwriters.<br />• Choice No-fault: Allows auto insureds the choice of remaining under the tort system or choosing no-fault at a reduced premium.<br />• Claim: A request for payment of a loss which may come under the terms of an insurance contract.<br />• Claims Adjustor: Person who settles claims: an agent, company adjustor, independent adjustor, adjustment bureau, or public adjustor.<br />• Claims-made Policy: A liability insurance policy under which coverage applies to claims filed during the policy period.<br />• Class Rating: Ratemaking method in which similar insureds are placed in the same underwriting class and each is charged the same rate. 
Also called manual rating.<br />• Coinsurance: 1) A provision under which an insured who carries less than the stipulated percentage of insurance to value will receive a loss payment that is limited to the same ratio which the amount of insurance bears to the amount required; 2) a policy provision frequently found in medical insurance, by which the insured person and the insurer share the covered losses under a policy in a specified ratio, i.e., 80 percent by the insurer and 20 percent by the insured.<br />• Collateral Source Rule: Under this rule, the defendant cannot introduce any evidence that shows the injured party has received compensation from other collateral sources.<br />• Collision Insurance: Protection against loss resulting from any damage to the policyholder's car caused by collision with another vehicle or object, or by upset of the insured car, whether it was the insured's fault or not.<br />• Combined Ratio: Basically, a measure of the relationship between dollars spent for claims and expenses and premium dollars taken in; more specifically, the sum of the ratio of losses incurred to premiums earned and the ratio of commissions and expenses incurred to premiums written. A ratio above 100 means that for every premium dollar taken in, more than a dollar went for losses, expenses, and commissions.<br />• Commercial General Liability Policy (CGL): Commercial liability policy drafted by the Insurance Services Office containing two coverage forms, an occurrence form and a claims-made form.<br />• Commercial Lines: Insurance for businesses, organizations, institutions, governmental agencies, and other commercial establishments.<br />• Commercial Multiple Peril Policy: A package of insurance that includes a wide range of essential coverages for the commercial establishment.<br />• Commercial Package Policy (CPP): A commercial policy that can be designed to meet the specific insurance needs of business firms. 
Property and liability coverage forms are combined to form a single policy.<br />• Commission: The part of an insurance premium paid by the insurer to an agent or broker for his services in procuring and servicing the insurance.<br />• Commissioner: A state officer who administers the state's insurance laws and regulations. In some states, this regulator is called the director or superintendent of insurance.<br />• Common Stock: Securities that represent an ownership interest in a corporation.<br />• Community Property: A special ownership form requiring that one half of all property earned by a husband or wife during marriage belongs to each. Community property laws do not generally apply to property acquired by gift, by will, or by descent.<br />• Company Adjuster: Claims adjuster who is a salaried employee representing only one company.<br />• Comparative Negligence: Under this concept a plaintiff (the person bringing suit) may recover damages even though guilty of some negligence. His or her recovery, however, is reduced by the amount or percent of that negligence.<br />• Completed Operations: Liability arising out of faulty work performed away from the premises after the work or operations are completed. Applicable to contractors, plumbers, electricians, repair shops, and similar firms.<br />• Comprehensive Automobile Insurance: Protection against loss resulting from damage to the insured auto, other than loss by collision or upset.<br />• Comprehensive<br />• Comprehensive Personal Liability Insurance: Protection against loss arising out of legal liability to pay money for damage or injury to others for which the insured is responsible. It does not include automobile or business operation liabilities.<br />• Compulsory Auto Liability Insurance: Insurance laws in some states require motorists to carry at least certain minimum auto coverages. 
This is called "compulsory" insurance.<br />• Compulsory Insurance: Any form of insurance which is required by law.<br />• Compulsory Insurance Law: Law protecting accident victims against irresponsible motorists by requiring owners and operators of automobiles to carry certain amounts of liability insurance in order to license the vehicle and drive legally within the state.<br />• Concealment: Deliberate failure of an applicant for insurance to reveal a material fact to the insurer.<br />• Concurrent Causation: Legal doctrine that states when a property loss is due to two causes, one that is excluded and one that is covered, the policy provides coverage.<br />• Conditional Receipt: A receipt given for premium payments accompanying an application for insurance. If the application is approved as applied for, the coverage is effective as of the date of the prepayment or the date on which the last of the underwriting requirements, such as a medical examination, has been fulfilled.<br />• Conditions: Provisions inserted in an insurance contract that qualify or place limitations on the insurer's promise to perform.<br />• Conservation: The attempt by the insurer to prevent the lapse of a policy.<br />• Consideration: One of the elements for a binding contract. Consideration is acceptance by the insurance company of the payment of the premium and the statement made by the prospective policyholder in the application.<br />• Consideration Clause: The clause that stipulates the basis on which the company issues the insurance contract. In health policies, the consideration is usually the statements in the application and the payment of premium.<br />• Consequential Loss: Financial loss occurring as the consequence of some other loss. Often called an indirect loss.<br />• Contents Broad Form: See Homeowners 4 policy.<br />• Contingent Liability: Liability arising out of work done by independent contractors for a firm. 
A firm may be liable for the work done by an independent contractor if the activity is illegal, the situation does not permit delegation of authority, or the work is inherently dangerous.<br />• Contract: A binding agreement between two or more parties for the doing or not doing of certain things. A contract of insurance is embodied in a written document called the policy.<br />• Contractual Liability: Legal liability of another party that the business firm agrees to assume by a written or oral contract.<br />• Contribution by Equal Shares: Type of other insurance provision often found in liability insurance contracts that requires each company to share equally in the loss until the share of each insurer equals the lowest limit of liability under any policy or until the full amount of loss is paid.<br />• Contributory: A group insurance plan issued to an employer under which both the employer and employee contribute to the cost of the plan. Seventy-five percent of the eligible employees must be insured. (See Noncontributory.)<br />• Contributory Negligence: Negligence of the damaged person that helped to cause the accident. Some states bar recovery to the plaintiff if the plaintiff was contributorily negligent to any extent. Others apply comparative negligence.<br />• Convertible Bond: A bond that offers the holder the privilege of converting the bond into a specified number of shares of stock.<br />• Cost Basis: An amount attributed to an asset for income tax purposes; used to determine gain or loss on sale or transfer; used to determine the value of a gift.<br />• Coverage: The scope of protection provided under a contract of insurance; any of several risks covered by a policy.<br />• Coverage for Damage to Your Auto: That part of the personal auto policy insuring payment for damage or theft of the insured automobile. 
This optional coverage can be used to insure both collision and other-than-collision losses.<br />• Covered: A person covered by a pension plan is one who has fulfilled the eligibility requirements in the plan, for whom benefits have accrued or are accruing, or who is receiving benefits under the plan.<br />• CPCU: See Chartered Property and Casualty Underwriter.<br />• Credibility: A statistical measure of the degree to which past results make good forecasts of future results.<br />• Credibility Factor: The weight given to an individual insured's past experience in computing premiums for future coverage.<br />• Credit Insurance: A guarantee to manufacturers, wholesalers, and service organizations that they will be paid for goods shipped or services rendered. Applies to that part of working capital which is represented by accounts receivable.<br />• Crop-hail Insurance: Protection against damage to growing crops as a result of hail or certain other named perils.<br />• Cross Purchase Agreement: Specifies the terms for the surviving partners or shareholders to buy a deceased's share of the business's ownership.<br />• CSR: Customer service representatives support the work of insurance agents with a variety of tasks that must be done within a company or agency to deliver services to and handle requests from clients.<br />• Currently Insured: Status of a covered person under the Old-Age, Survivors, and Disability Insurance (OASDI) program who has at least six quarters of coverage out of the last thirteen quarters, ending with the quarter of death, disability, or entitlement to retirement benefits.<br />D<br />• Damage to Property of Others: Damage covered up to $500 per occurrence for an insured who damages another's property. Payment is made despite the lack of legal liability. Coverage is included in Section II of the homeowners policy.<br />• Debenture: A bond that is backed only by the general credit of the issuing corporation. 
No specific property is pledged as security behind the loan.<br />• Declarations: Statements in an insurance contract that provide information about the property or life to be insured, used for underwriting, rating, and identification purposes.<br />• Declination: The insurer's refusal to insure an individual after careful evaluation of the application for insurance and any other pertinent factors.<br />• Deductible: An amount which a policyholder agrees to pay, per claim or per accident, toward the total amount of an insured loss.<br />• Dental Insurance: Individual or group plan that helps pay costs of normal dental care as well as damage to teeth from an accident.<br />• Dependent Benefits: Social Security benefits available to the spouse or children of a Social Security beneficiary.<br />• Deposit Premium: The premium deposit paid by a prospective policy holder when an application is made for an insurance policy. It is usually equal, at least, to the first month's estimated premium and is applied toward the actual premium when billed.<br />• Depreciation: A decrease in the value of property over a period of time due to wear and tear or obsolescence. Depreciation is used to determine the actual cash value of property at time of loss. 
(See Actual Cash Value)<br />• Difference in Conditions Insurance (DIC): "All-risks" policy that covers other perils not insured by basic property insurance contracts, supplemental to and excluding the coverage provided by underlying contracts.<br />• Direct Loss: Financial loss that results directly from an insured peril.<br />• Direct Placement: Sale of an entire issue of bonds or stock by the issuer to one or a few large institutional customers, such as an insurance company, without trying to market the issue publicly.<br />• Direct Premiums Written: Property and casualty insurance premiums written (less return premiums), without any allowance for premiums for assumed or ceded reinsurance.<br />• Direct Response System: A marketing method where insurance is sold without the services of an agent. Potential customers are solicited by advertising in the mail, newspapers, magazines, television, radio, and other media.<br />• Direct Writer: The industry term for a company which uses its own sales employees to write its policies. Sometimes refers to companies which contract with exclusive agents.<br />• Directors' and Officers' Liability: The exposure of corporate managers to claims from shareholders, government agencies, employees, and others alleging mismanagement.<br />• Disability: A physical or mental impairment that substantially limits one or more major life activities of an individual. It may be partial or total. 
(See Partial Disability; Total Disability.)<br />• Disability Benefit: Periodic payments, usually monthly, payable to participants under some retirement plans, if such participants are eligible for the benefits and become totally and permanently disabled prior to the normal retirement date.<br />• Disability Income Insurance: A form of health insurance that provides periodic payments to replace income when an insured person is unable to work as a result of illness, injury, or disease.<br />• Disappearing Deductible: Deductible in an insurance contract that provides for a decreasing deductible amount as the size of the loss increases, so that small claims are not paid but large losses are paid in full.<br />• Dismemberment: Loss of body members (limbs), or use thereof, or loss of sight due to injury.<br />Dictionary of Insurance Terms<br />A<br />• Absolute Liability: Liability for damages even though fault or negligence cannot be proven.<br />• Accident: An event or occurrence which is unforeseen and unintended.<br />• Accidental Bodily Injury: Injury to the body as the result of an accident.<br />• Accounting: The process of recording, summarizing, and allocating all items of income and expense of the company and analyzing, verifying, and reporting the results.<br />• Act of God: A flood, earthquake or other nonpreventable accident resulting from natural causes that occur without any human intervention.<br />• Activities of Daily Living: A list of activities, normally including mobility, dressing, bathing, toileting, transferring, and eating which are used to assess degree of impairment and determine eligibility for some types of insurance benefits.<br />• Actual Cash Value (ACV): 1) The cost of replacing or restoring property at prices prevailing at the time and place of the loss, less depreciation, however caused; 2) replacement cost minus depreciation.<br />• Actuarially Fair: The price for insurance which exactly represents the expected losses<br />• Actuary: 
A person professionally trained in the technical aspects of pensions, insurance and related fields. The actuary estimates how much money must be contributed to an insurance or pension fund in order to provide future<br />• Additional insured: A person, company or entity protected by an insurance policy in addition to the insured.<br />• Adjuster: A person who investigates and settles losses for an insurance carrier.<br />• Adjusting: The process of investigating and settling losses with or by an insurance carrier.<br />• Adjustment Bureau: Organization for adjusting insurance claims that is supported by insurers using the bureau's services.<br />• Administrative Services Only (AS0) Plan: An arrangement under which an insurance carrier or an independent organization will, for a fee, handle the administration of claims, benefits and other administrative functions for a selfinsured group.<br />• Advance Premium Mutual: Mutual insurance company owned by the policy owners that does not issue assessable policies but charges premiums expected to be sufficient to pay all claims and expenses.<br />• Adverse Selection: The tendency of persons who present a poorerthanaverage risk to apply for, or continue, insurance to a greater extent than do persons with average or betterthanaverage expectations of loss.<br />• Age Limits: Stipulated minimum and maximum ages below and above which the company will not accept applications or may not renew policies.<br />• Agent: An insurance company representative licensed by the state who solicits, negotiates or effects contracts of insurance, and provides service to the policyholder for the insurer.<br />• Aggregate Deductible: Deductible in some property and health insurance contracts in which all covered losses during a year are added together and the insurer pays only when the aggregate deductible amount is exceeded.<br />• Aggregate Indemnity: The maximum dollar amount that may be collected for any disability or period of disability 
under the policy.<br />• Alien Insurer: An insurance company domiciled in another country.<br />• Allied Lines: A term for forms of property insurance allied with fire insurance, covering such perils as windstorm, hail, explosion, and riot.<br />• Allocated Benefits: Benefits for which the maximum amount payable for specific services is itemized in the contract.<br />• All-risks Policy: Coverage by an insurance contract that promises to cover all losses except those losses specifically excluded in the policy. See also: Risks of direct loss to property.<br />• Amendment: A formal document changing the provisions of an insurance policy signed jointly by the insurance company officer and the policy holder or his authorized representative.<br />• Amortization: Paying an interest-bearing liability by gradual reduction through a series of installments, as opposed to one lump-sum payment.<br />• Annual Statement: The annual report, as of December 31, of an insurer to a state insurance department, showing assets and liabilities, receipts and disbursements, and other financial data.<br />• Application: A signed statement of facts made by a person applying for life insurance and then used by the insurance company to decide whether or not to issue a policy. The application becomes part of the insurance contract when the policy is issued.<br />• Arbitration: A form of alternative dispute resolution where an unbiased person or panel renders an opinion as to responsibility for or extent of a loss.<br />• Arson: The willful and malicious burning of, or attempt to burn, any structure or other property, often with criminal or fraudulent intent.<br />• Assessment Association: An insurer that does not charge a fixed premium for insurance, but rather assesses its members periodically to pay its losses.
Assessment insurers usually collect an advance premium which is estimated to cover losses and expenses, but reserve the right to make additional assessments whenever the premium collected is insufficient.<br />• Assessment Mutual: Mutual insurance company that has the right to assess policy owners for losses and expenses.<br />• Assets: All funds, property, goods, securities, rights of action, or resources of any kind owned by an insurance company. Statutory accounting, however, excludes nonadmitted assets, such as deferred or overdue premiums, that would be considered assets under generally accepted accounting principles (GAAP).<br />• Assignment: The legal transfer of one person's interest in an insurance policy to another person.<br />• Association Captive: Type of captive insurer owned by members of a sponsoring organization or group, such as a trade association.<br />• Association Group: A group formed from members of a trade or a professional association for group insurance under one master health insurance contract.<br />• Assumption of Risk Doctrine: Defense against a negligence claim that bars recovery for damages if a person understands and recognizes the danger inherent in a particular activity or occupation.<br />• Attractive Nuisance: Condition that can attract and injure children. Occupants of land on which such a condition exists are liable for injuries to children.<br />• Automatic Reinsurance: An agreement that the insurer must cede and the reinsurer must accept all risks within certain explicitly defined limits. The reinsurer undertakes in advance to grant reinsurance to the extent specified in the agreement in every case where the ceding company accepts the application and retains its own limit.<br />• Automobile Insurance Plan: One of several types of "shared market" mechanisms where persons who are unable to obtain such insurance in the voluntary market are assigned to a particular company, usually at a higher rate than the voluntary market. 
Formerly called "Assigned Risk."<br />• Automobile Liability Insurance: Protection for the insured against financial loss because of legal liability for car-related injuries to others or damage to their property.<br />• Automobile Physical Damage Insurance: Coverage to pay for damage to or loss of an insured automobile resulting from collision, fire, theft, or other perils.<br />• Automobile Reinsurance Facility: One of several types of "shared market" mechanisms used to make automobile insurance available to persons who are unable to obtain such insurance in the regular market.<br />• Aviation Insurance: Aircraft insurance including coverage of aircraft or their contents, the owner's liability, and accident insurance on the passengers.<br />• Average Indexed Monthly Earnings (AIME): Under the OASDI program, the person's actual earnings are indexed to determine his or her primary insurance amount (PIA).<br />• Avoidance: see Loss Avoidance.<br />B<br />• Bailees Customers Policy: Policy that covers the loss or damage to property of customers regardless of a bailee's legal liability.<br />• Basic Form: see Dwelling Property 1.<br />• Basis: An amount attributed to an asset for income tax purposes; used to determine gain or loss on sale or transfer; used to determine the value of a gift.<br />• Beneficiary: The person designated or provided for by the policy terms to receive any benefits provided by the policy or plan upon the death of the insured.<br />• Benefits: The amount payable by the insurance company to a claimant, assignee or beneficiary under each coverage.<br />• Binder: A written or oral contract issued temporarily to place insurance in force when it is not possible to issue a new policy or endorse the existing policy immediately. A binder is subject to the premium and all the terms of the policy to be issued.<br />• Binding Receipt: A receipt given for a premium payment accompanying the application for insurance.
If the policy is approved, this binds the company to make the policy effective from the date of the receipt.<br />• Blanket Medical Expense: A provision which entitles the insured person to collect up to a maximum established in the policy for all hospital and medical expenses incurred, without any limitations on individual types of medical expenses.<br />• Boat Owners Package Policy: A special package policy for boat owners that combines physical damage insurance, medical expense insurance, liability insurance, and other coverages in one contract.<br />• Boiler and Machinery Insurance: Coverage for loss arising out of the operation of pressure, mechanical, and electrical equipment. It covers loss of the boiler and machinery itself, damage to other property, and business interruption losses.<br />• Bond: A certificate issued by a government or corporation as evidence of a debt. The issuer of the bond promises to pay the bondholder a specified amount of interest for a specified period and to repay the loan on the expiration (maturity) date.<br />• Book of Business: The number, size and type of accounts (policyholders) that an agent "owns."<br />• Branch Office System: Type of life insurance marketing system under which branch offices are established in various areas.
Salaried branch managers, who are employees of the company, are responsible for hiring and training new agents.<br />• Break in Service: A calendar year, plan year or other consecutive 12-month period designated by the plan during which a plan participant does not complete more than 500 hours of service.<br />• Broad Form: see Dwelling Property 2; Homeowners 2 Policy.<br />• Broker: A marketing specialist who represents buyers of property and liability insurance and who deals with either agents or companies in arranging for the coverage required by the customer.<br />• Burglary: Breaking and entering into another person's property with felonious intent.<br />• Burglary and Theft Insurance: Coverage against property losses due to burglary, robbery, or larceny.<br />• Business Insurance: A policy which primarily provides coverage of benefits to a business as contrasted to an individual. It is issued to indemnify a business for the loss of services of a key employee or a partner who becomes disabled.<br />• Business Interruption Insurance: Protection for a business owner against losses resulting from a temporary shutdown because of fire or other insured peril. The insurance provides reimbursement for lost net profits and necessary continuing expenses.<br />• Business Life Insurance: Life insurance purchased by a business enterprise on the life of a member of the firm. It is often bought by partnerships to protect the surviving partners against loss caused by the death of a partner, or by a corporation to reimburse it for loss caused by the death of a key employee.<br />• Buy-Sell Agreement: An agreement made by the owners of a business to purchase the share of a disabled or deceased owner.
The value of each owner's share of the business and the exact terms of the buying-and-selling process are established before death or the beginning of disability.<br />C<br />• Cancellation: The discontinuance of an insurance policy before its normal expiration date, either by the insured or the company.<br />• Capacity: The amount of capital available to an insurance company or to the industry as a whole for underwriting general insurance coverage or coverage for specific perils.<br />• Capital Gain: Profit realized on the sale of securities. An unrealized capital gain is an increase in the value of securities that have not been sold.<br />• Capital Retention Approach: A method used to estimate the amount of life insurance to own. Under this method, the insurance proceeds are retained and are not liquidated.<br />• Captive Insurance Company: A company owned solely or in large part by one or more noninsurance entities for the primary purpose of providing insurance coverage to the owner or owners.<br />• Captive Insurer: Insurance company established and owned by a parent firm in order to insure its loss exposures while reducing premium costs, providing easier access to a reinsurer, and perhaps easing tax burdens. See also Association captive; Pure captive.<br />• Cargo Insurance: Type of ocean marine insurance that protects the shipper of the goods against financial loss if the goods are damaged or lost.<br />• Casualty Insurance: Insurance concerned with the insured's legal liability for injuries to others or damage to other persons' property; also encompasses such forms of insurance as plate glass, burglary, robbery and workers' compensation.<br />• Catastrophe: Event which causes a loss of extraordinary magnitude, such as a hurricane or tornado.<br />• Causes-of-loss Form: Form added to commercial property insurance policy that indicates the causes of loss that are covered.
There are four causes-of-loss forms: basic, broad, special, and earthquake.<br />• Cede: To transfer all or part of a risk written by an insurer (the ceding, or primary company) to a reinsurer.<br />• Certificate of Insurance: A statement of coverage issued to an individual insured under a group insurance contract, outlining the insurance benefits and principal provisions applicable to the member.<br />• Certified Financial Planner (CFP): Professional who has attained a high degree of technical competency in financial planning and has passed a series of professional examinations by the College of Financial Planning.<br />• Certified Insurance Counselor (CIC): Professional in property and liability insurance who has passed a series of examinations by the Society of Certified Insurance Counselors.<br />• Cession: Amount of the insurance ceded to a reinsurer by the original insuring company in a reinsurance operation.<br />• Chartered Life Underwriter (CLU): Professional who has attained a high degree of technical competency in life insurance and has passed a series of professional examinations administered by The American College.<br />• Chartered Property and Casualty Underwriter (CPCU): Professional who has attained a high degree of technical competency in property and liability insurance and has passed ten professional examinations administered by the American Institute for Property and Liability Underwriters.<br />• Choice No-fault: Allows auto insureds the choice of remaining under the tort system or choosing no-fault at a reduced premium.<br />• Claim: A request for payment of a loss which may come under the terms of an insurance contract.<br />• Claims Adjuster: Person who settles claims: an agent, company adjuster, independent adjuster, adjustment bureau, or public adjuster.<br />• Claims-made Policy: A liability insurance policy under which coverage applies to claims filed during the policy period.<br />• Class Rating: Ratemaking method in which similar insureds are placed in the same underwriting class and each is charged the same rate.
Also called manual rating.<br />• Coinsurance: 1) A provision under which an insured who carries less than the stipulated percentage of insurance to value, will receive a loss payment that is limited to the same ratio which the amount of insurance bears to the amount required; 2) a policy provision frequently found in medical insurance, by which the insured person and the insurer share the covered losses under a policy in a specified ratio, e.g., 80 percent by the insurer and 20 percent by the insured.<br />• Collateral Source Rule: Under this rule, the defendant cannot introduce any evidence that shows the injured party has received compensation from other collateral sources.<br />• Collision Insurance: Protection against loss resulting from any damage to the policyholder's car caused by collision with another vehicle or object, or by upset of the insured car, whether it was the insured's fault or not.<br />• Combined Ratio: Basically, a measure of the relationship between dollars spent for claims and expenses and premium dollars taken in; more specifically, the sum of the ratio of losses incurred to premiums earned and the ratio of commissions and expenses incurred to premiums written. A ratio above 100 means that for every premium dollar taken in, more than a dollar went for losses, expenses, and commissions.<br />• Commercial General Liability Policy (CGL): Commercial liability policy drafted by the Insurance Services Office containing two coverage forms, an occurrence form and a claims-made form.<br />• Commercial Lines: Insurance for businesses, organizations, institutions, governmental agencies, and other commercial establishments.<br />• Commercial Multiple Peril Policy: A package of insurance that includes a wide range of essential coverages for the commercial establishment.<br />• Commercial Package Policy (CPP): A commercial policy that can be designed to meet the specific insurance needs of business firms.
Property and liability coverage forms are combined to form a single policy.<br />• Commission: The part of an insurance premium paid by the insurer to an agent or broker for his services in procuring and servicing the insurance.<br />• Commissioner: A state officer who administers the state's insurance laws and regulations. In some states, this regulator is called the director or superintendent of insurance.<br />• Common Stock: Securities that represent an ownership interest in a corporation.<br />• Community Property: A special ownership form requiring that one half of all property earned by a husband or wife during marriage belongs to each. Community property laws do not generally apply to property acquired by gift, by will, or by descent.<br />• Company Adjuster: Claims adjuster who is a salaried employee representing only one company.<br />• Comparative Negligence: Under this concept a plaintiff (the person bringing suit) may recover damages even though guilty of some negligence. His or her recovery, however, is reduced by the amount or percent of that negligence.<br />• Completed Operations: Liability arising out of faulty work performed away from the premises after the work or operations are completed. Applicable to contractors, plumbers, electricians, repair shops, and similar firms.<br />• Comprehensive Automobile Insurance: Protection against loss resulting from damage to the insured auto, other than loss by collision or upset.<br />• Comprehensive Personal Liability Insurance: Protection against loss arising out of legal liability to pay money for damage or injury to others for which the insured is responsible. It does not include automobile or business operation liabilities.<br />• Compulsory Auto Liability Insurance: Insurance laws in some states require motorists to carry at least certain minimum auto coverages.
This is called "compulsory" insurance.<br />• Compulsory Insurance: Any form of insurance which is required by law.<br />• Compulsory Insurance Law: Law protecting accident victims against irresponsible motorists by requiring owners and operators of automobiles to carry certain amounts of liability insurance in order to license the vehicle and drive legally within the state.<br />• Concealment: Deliberate failure of an applicant for insurance to reveal a material fact to the insurer.<br />• Concurrent Causation: Legal doctrine that states when a property loss is due to two causes, one that is excluded and one that is covered, the policy provides coverage.<br />• Conditional Receipt: A receipt given for premium payments accompanying an application for insurance. If the application is approved as applied for, the coverage is effective as of the date of the prepayment or the date on which the last of the underwriting requirements, such as a medical examination, has been fulfilled.<br />• Conditions: Provisions inserted in an insurance contract that qualify or place limitations on the insurer's promise to perform.<br />• Conservation: The attempt by the insurer to prevent the lapse of a policy.<br />• Consideration: One of the elements for a binding contract. Consideration is acceptance by the insurance company of the payment of the premium and the statement made by the prospective policyholder in the application.<br />• Consideration Clause: The clause that stipulates the basis on which the company issues the insurance contract. In health policies, the consideration is usually the statements in the application and the payment of premium.<br />• Consequential Loss: Financial loss occurring as the consequence of some other loss. Often called an indirect loss.<br />• Contents Broad Form: See Homeowners 4 policy.<br />• Contingent Liability: Liability arising out of work done by independent contractors for a firm. 
A firm may be liable for the work done by an independent contractor if the activity is illegal, the situation does not permit delegation of authority, or the work is inherently dangerous.<br />• Contract: A binding agreement between two or more parties for the doing or not doing of certain things. A contract of insurance is embodied in a written document called the policy.<br />• Contractual Liability: Legal liability of another party that the business firm agrees to assume by a written or oral contract.<br />• Contribution by Equal Shares: Type of other insurance provision often found in liability insurance contracts that requires each company to share equally in the loss until the share of each insurer equals the lowest limit of liability under any policy or until the full amount of loss is paid.<br />• Contributory: A group insurance plan issued to an employer under which both the employer and employee contribute to the cost of the plan. Seventy-five percent of the eligible employees must be insured. (See Noncontributory.)<br />• Contributory Negligence: Negligence of the damaged person that helped to cause the accident. Some states bar recovery to the plaintiff if the plaintiff was contributorily negligent to any extent. Others apply comparative negligence.<br />• Convertible Bond: A bond that offers the holder the privilege of converting the bond into a specified number of shares of stock.<br />• Cost Basis: An amount attributed to an asset for income tax purposes; used to determine gain or loss on sale or transfer; used to determine the value of a gift.<br />• Coverage: The scope of protection provided under a contract of insurance; any of several risks covered by a policy.<br />• Coverage for Damage to Your Auto: That part of the personal auto policy insuring payment for damage or theft of the insured automobile.
This optional coverage can be used to insure both collision and other-than-collision losses.<br />• Covered: A person covered by a pension plan is one who has fulfilled the eligibility requirements in the plan, for whom benefits have accrued, or are accruing, or who is receiving benefits under the plan.<br />• CPCU: See Chartered Property and Casualty Underwriter.<br />• Credibility: A statistical measure of the degree to which past results make good forecasts of future results.<br />• Credibility Factor: The weight given to an individual insured's past experience in computing premiums for future coverage.<br />• Credit Insurance: A guarantee to manufacturers, wholesalers, and service organizations that they will be paid for goods shipped or services rendered. Applies to that part of working capital which is represented by accounts receivable.<br />• Crop-hail Insurance: Protection against damage to growing crops as a result of hail or certain other named perils.<br />• Cross Purchase Agreement: Specifies the terms for the surviving partners or shareholders to buy a deceased's share of the business's ownership.<br />• CSR: Customer service representatives support the work of insurance agents with a variety of tasks that must be done within a company or agency to deliver services to and handle requests from clients.<br />• Currently Insured: Status of a covered person under the Old-Age, Survivors, and Disability Insurance (OASDI) program who has at least six quarters of coverage out of the last thirteen quarters, ending with the quarter of death, disability, or entitlement to retirement benefits.<br />D<br />• Damage to Property of Others: Damage covered up to $500 per occurrence for an insured who damages another's property. Payment is made despite the lack of legal liability. Coverage is included in Section II of the homeowners policy.<br />• Debenture: A bond that is backed only by the general credit of the issuing corporation.
No specific property is pledged as security behind the loan.<br />• Declarations: Statements in an insurance contract that provide information about the property or life to be insured, used for underwriting, rating, and identification purposes.<br />• Declination: The insurer's refusal to insure an individual after careful evaluation of the application for insurance and any other pertinent factors.<br />• Deductible: An amount which a policyholder agrees to pay, per claim or per accident, toward the total amount of an insured loss.<br />• Dental Insurance: Individual or group plan that helps pay costs of normal dental care as well as damage to teeth from an accident.<br />• Dependent Benefits: Social Security benefits available to the spouse or children of a Social Security beneficiary.<br />• Deposit Premium: The premium deposit paid by a prospective policy holder when an application is made for an insurance policy. It is usually equal, at least, to the first month's estimated premium and is applied toward the actual premium when billed.<br />• Depreciation: A decrease in the value of property over a period of time due to wear and tear or obsolescence. Depreciation is used to determine the actual cash value of property at time of loss.
(See Actual Cash Value)<br />• Difference in Conditions Insurance (DIC): "All-risks" policy that covers other perils not insured by basic property insurance contracts, supplemental to and excluding the coverage provided by underlying contracts.<br />• Direct Loss: Financial loss that results directly from an insured peril.<br />• Direct Placement: Sale of an entire issue of bonds or stock by the issuer to one or a few large institutional customers such as an insurance company without trying to market the issue publicly.<br />• Direct Premiums Written: Property and casualty insurance premiums written (less return premiums), without any allowance for premiums for assumed or ceded reinsurance.<br />• Direct Response System: A marketing method where insurance is sold without the services of an agent. Potential customers are solicited by advertising in the mail, newspapers, magazines, television, radio, and other media.<br />• Direct Writer: The industry term for a company which uses its own sales employees to write its policies. Sometimes refers to companies which contract with exclusive agents.<br />• Directors' and Officers' Liability: The exposure of corporate managers to claims from shareholders, government agencies, employees, and others alleging mismanagement.<br />• Disability: A physical or a mental impairment that substantially limits one or more major life activities of an individual. It may be partial or total.
(See Partial Disability; Total Disability.)<br />• Disability Benefit: Periodic payments, usually monthly, payable to participants under some retirement plans, if such participants are eligible for the benefits and become totally and permanently disabled prior to the normal retirement date.<br />• Disability Income Insurance: A form of health insurance that provides periodic payments to replace income when an insured person is unable to work as a result of illness, injury, or disease.<br />• Disappearing Deductible: Deductible in an insurance contract that provides for a decreasing deductible amount as the size of the loss increases, so that small claims are not paid but large losses are paid in full.<br />• Dismemberment: Loss of body members (limbs), or use thereof, or loss of sight due to injury.<br /><br /></span><span style="font-family:arial;"></span>Rajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-1354421905873049862010-09-27T17:11:00.001-07:002010-09-27T17:11:23.623-07:00Team EthicsThe following six attributes of the team are associated with ethical team behavior:<br /> Customer relations that are truthful and fair to all parties<br /> Protecting company property<br /> Compliance with company policies<br /> Integrity of information<br /> Attendance<br /> Redefine standards of qualityRajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-77935605085923395102010-09-27T17:10:00.000-07:002010-09-27T17:11:03.952-07:00Team Member Interaction1. Know communication and work preference styles of staff and assure that the team<br />complements those communication and work preference styles.<br />2. Set clear, measurable work requirement standards.<br />3. 
Delegate authority to staff members, empowering them to perform tasks in the<br />manner they deem most effective and efficient.<br />4. Exact responsibility and accountability from team members for completing their work<br />tasks in an effective, efficient manner with high-quality work products.<br />5. Give immediate and objective feedback to team members on the performance of<br />their individual and team tasks.<br />6. Communicate, communicate, and communicate with all team members about any<br />event that may impact team performance.Rajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-10963765854578649902010-09-27T17:09:00.001-07:002010-09-27T17:09:54.126-07:00Requirements TracingA requirement is defined as the description of a condition or capability of a system.<br />Each requirement must be logical and testable. To ensure that all requirements have been<br />implemented and tested, they must be traceable. Each requirement must be mapped to<br />test cases, test steps, and defects.<br />Mercury Interactive's TestDirector does a good job of mapping the entire history and<br />updating the status of a requirement as a defect is detected, retested, or corrected. Other<br />tools, such as DOORS, Caliber, and IBM's Rational RequisitePro, can also be used to log, track,<br />and map requirements.<br />Example<br />If a project team is developing an object-oriented Internet application, the requirements or<br />stakeholder needs will be traced to use cases, activity diagrams, class diagrams, and test<br />cases or scenarios in the analysis stage of the project. Reviews of these deliverables will<br />include a check of traceability to ensure that all requirements are accounted for.<br />In the design stage of the project, the tracing will continue to design and test models.
Again,<br />reviews for these deliverables will include a check for traceability to ensure that nothing has<br />been lost in the translation of analysis deliverables. Requirements mapping to system<br />components drives the test partitioning strategies. Test strategies evolve along with system<br />mapping. Test case developers need to know where each part of a business rule is<br />mapped in the application architecture. For example, a business rule regarding a customer<br />phone number may be implemented on the client side as a GUI field edit for<br />high-performance order entry. In another application it may be implemented as a stored procedure on the<br />data server so the rule can be enforced across applications.<br />When the system is implemented, test cases or scenarios will be executed to prove that the<br />requirements were implemented in the application. Tools can be used throughout the project<br />to help manage requirements and track the implementation status of each one.Rajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-7043800721005724142010-09-27T17:04:00.000-07:002010-09-27T17:08:48.618-07:00The “V” Concept of TestingLife cycle testing involves continuous testing of the system during the developmental process.<br />At predetermined points, the results of the development process are inspected to determine<br />the correctness of the implementation. These inspections identify defects at the earliest<br />possible point.<br />Life cycle testing cannot occur until a formalized SDLC has been incorporated. Life cycle<br />testing is dependent upon the completion of predetermined deliverables at specified points<br />in the developmental life cycle. If information services personnel have the discretion to<br />determine the order in which deliverables are developed, the life cycle test process<br />becomes ineffective.
This is due to variability in the process, which normally increases cost.<br />The life cycle testing concept can best be accomplished by the formation of a test team. The<br />team is composed of members of the project who may be both implementing and testing<br />the system. When members of the team are testing the system, they must use a formal testing<br />methodology to clearly distinguish the implementation mode from the test mode. They must<br />also follow a structured methodology when approaching testing, just as they do when<br />approaching system development. Without a specific structured test methodology, the test<br />team concept is ineffective because team members would follow the same methodology for<br />testing as they used for developing the system. Experience shows people are blind to their<br />own mistakes, so the effectiveness of the test team is dependent upon developing the system<br />under one methodology and testing it under another.<br />The life cycle testing concept is illustrated below. This illustration shows that when the project<br />starts, both the system development process and the system test process begin. The team that is<br />developing the system begins the systems development process and the team that is<br />conducting the system test begins planning the system test process. Both teams start at the<br />same point using the same information. The systems development team has the responsibility<br />to define and document the requirements for developmental purposes. The test team will<br />likewise use those same requirements, but for the purpose of testing the system. At<br />appropriate points during the developmental process, the test team will test the<br />developmental process in an attempt to uncover defects.
The test team should use the<br />structured testing techniques outlined in this book as a basis for evaluating the system<br />development process deliverables.<br /><br /><span style="font-weight:bold;">Mercury QuickTest Professional -<br />Features & Benefits</span><br />• Ensure immediate return on investment through industry-leading ease of use and pre-configured environment support. <br />• Operate stand-alone, or integrated into Mercury Business Process Testing and Mercury Quality Center. <br />• Introduce next-generation "zero-configuration" Keyword Driven testing technology in QuickTest Professional — allowing for fast test creation, easier maintenance, and more powerful data-driving capability. <br />• Promote collaboration and sharing of test assets among testing groups through an enterprise-class object repository. <br />• Identify objects with unique Smart Object Recognition, even if they change from build to build, enabling reliable unattended script execution. <br />• Manage multiple object repositories with ease to facilitate the building of automation frameworks and libraries. <br />• Handle unforeseen application events with Recovery Manager, facilitating 24x7 testing to meet test project deadlines. <br />• Reduce time to resolve defects by automatically reproducing defects and identifying problems with the built-in test execution recorder. <br />• Collapse test documentation and test creation to a single step with Auto-documentation technology. <br />• Easily data-drive any object definition, method, checkpoint, and output value via the Integrated Data Table. <br />• Provide a complete IDE environment for QA engineers.
<br />• Preserve your investments in Mercury WinRunner by leveraging existing test scripts written in the Test Scripting Language (TSL) with assets from the QuickTest Professional/WinRunner integration. <br />• Provide detailed, step-by-step reports – now with video. <br />• Enable thorough validation of applications through a full complement of checkpoints. <br /><br /><span style="font-weight:bold;">Mercury QuickTest Professional - How it Works</span><br />Mercury QuickTest Professional™ allows even novice testers to be productive in minutes. You can create a test script by simply pressing a Record button and using an application to perform a typical business process. Each step in the business process is automatically documented with a plain-English sentence and screen shot. Users can easily modify, remove, or rearrange test steps in the Keyword View. <br />QuickTest Professional can automatically introduce checkpoints to verify application properties and functionality, for example to validate output or check link validity. For each step in the Keyword View, there is an ActiveScreen showing exactly how the application under test looked at that step. You can also add several types of checkpoints for any object to verify that components behave as expected, simply by clicking on that object in the ActiveScreen. <br />You can then enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files. <br />Advanced testers can view and edit their test scripts in the Expert View, which reveals the underlying industry-standard VBScript that QuickTest Professional automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword View.
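The Data Table workflow described above (the same test steps run once per data row) can be sketched in plain Python. This is an illustrative stand-in, not QuickTest's actual API; the login check and the rows are invented for the example:

```python
# Sketch of data-driven testing: the same test steps run once per
# data row, the way QuickTest's Data Table drives iterations.
# The login() function stands in for the application under test.

def login(user, password):
    """Illustrative stand-in for a business process under test."""
    return user == "admin" and password == "secret"

# Each dictionary is one row of the "data table".
data_table = [
    {"user": "admin", "password": "secret", "expected": True},
    {"user": "admin", "password": "wrong",  "expected": False},
    {"user": "",      "password": "secret", "expected": False},
]

def run_iterations(table):
    """Run one iteration per row; report pass/fail for each row."""
    results = []
    for row in table:
        actual = login(row["user"], row["password"])
        results.append("pass" if actual == row["expected"] else "fail")
    return results

print(run_iterations(data_table))  # ['pass', 'pass', 'pass']
```

Expanding coverage then means adding rows rather than code, which is the point of data-driving a test.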
<br />Once a tester has run a script, a TestFusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test script specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining TestFusion reports with Mercury TestDirector, you can share reports across an entire QA and development team. <br />QuickTest Professional also facilitates the update process. As an application under test changes, such as when a “Login” button is renamed “Sign In,” you can make one update to the Shared Object Repository, and the update will propagate to all scripts that reference this object. You can publish test scripts to Mercury TestDirector, enabling other QA team members to reuse your test scripts, eliminating duplicative work. <br />QuickTest Professional supports functional testing of all popular environments, including Windows, Web, .Net, Visual Basic, ActiveX, Java, SAP, Siebel, Oracle, PeopleSoft, and terminal emulators. <br /><br /><span style="font-weight:bold;">What is the difference between an image checkpoint and a bitmap checkpoint?</span><br />Image checkpoints enable you to check the properties of a Web image. Bitmap checkpoints enable you to check an area of a Web page or application as a bitmap. While creating a test or component, you specify the area you want to check by selecting an object. You can check an entire object or any area within an object. QuickTest captures the specified object as a bitmap and inserts a checkpoint in the test or component. You can also choose to save only the selected area of the object with your test or component in order to save disk space. <br />For example, suppose you have a Web site that can display a map of a city the user specifies. The map has control keys for zooming. You can record the new map that is displayed after one click on the control key that zooms in the map.
Using the bitmap checkpoint, you can check that the map zooms in correctly.<br /><br />You can create bitmap checkpoints for all supported testing environments (as long as the appropriate add-ins are loaded).<br />Note: The results of bitmap checkpoints may be affected by factors such as operating system, screen resolution, and color settings.<br /><br /><span style="font-weight:bold;">Testing Mistakes</span><br /><br />It's easy to make mistakes when testing software or planning a testing effort. Some<br />mistakes are made so often, so repeatedly, by so many different people, that they deserve<br />the label Classic Mistake.<br />Classic mistakes cluster usefully into five groups, which I’ve called “themes”:<br />· The Role of Testing: who does the testing team serve, and how does it do that?<br />· Planning the Testing Effort: how should the whole team’s work be organized?<br />· Personnel Issues: who should test?<br />· The Tester at Work: designing, writing, and maintaining individual tests.<br />· Technology Rampant: quick technological fixes for hard problems.<br />I have two goals for this paper. First, it should identify the mistakes, put them in context,<br />describe why they’re mistakes, and suggest alternatives. Because the context of one<br />mistake is usually prior mistakes, the paper is written in a narrative style rather than as a<br />list that can be read in any order. Second, the paper should be a handy checklist of<br />mistakes.
For that reason, the classic mistakes are printed in a larger bold font when they<br />appear in the text, and they’re also summarized at the end.<br />Although many of these mistakes apply to all types of software projects, my specific focus<br />is the testing of commercial software products, not custom software or software that is<br />safety critical or mission critical.<br />This paper is essentially a series of bug reports for the testing process. You may think<br />some of them are features, not bugs. You may disagree with the severities I assign. You<br />may want more information to help in debugging, or want to volunteer information of<br />your own. Any decent bug reporting system will treat the original bug report as the first<br />part of a conversation. So should it be with this paper. Therefore, see<br />http://www.stlabs.com/marick/classic.htm for an ongoing discussion of this topic.<br />Theme One: The Role of Testing<br />A first major mistake people make is thinking that the testing team is responsible<br />for assuring quality. This role, often assigned to the first testing team in an<br />organization, makes it the last defense, the barrier between the development team<br />(accused of producing bad quality) and the customer (who must be protected from them).<br />It’s characterized by a testing team (often called the “Quality Assurance Group”) that has<br />formal authority to prevent shipment of the product. That in itself is a disheartening task:<br />the testing team can’t improve quality, only enforce a minimal level. Worse, that authority<br />is usually more apparent than real. Discovering that, together with the perverse incentives<br />of telling developers that quality is someone else’s job, leads to testing teams and testers<br />who are disillusioned, cynical, and view themselves as victims.
We’ve learned from<br />Deming and others that products are better and cheaper to produce when everyone, at<br />every stage in development, is responsible for the quality of their work ([Deming86],<br />[Ishikawa85]).<br />In practice, whatever the formal role, most organizations believe that the purpose of<br />testing is to find bugs. This is a less pernicious definition than the previous one, but<br />it’s missing a key word. When I talk to programmers and development managers about<br />testers, one key sentence keeps coming up: “Testers aren’t finding the important<br />bugs.” Sometimes that’s just griping, sometimes it’s because the programmers have a<br />skewed sense of what’s important, but I regret to say that all too often it’s valid criticism.<br />Too many bug reports from testers are minor or irrelevant, and too many important bugs<br />are missed.<br />What’s an important bug? Important to whom? To a first approximation, the answer must<br />be “to customers”. Almost everyone will nod their head upon hearing this definition, but<br />do they mean it? Here’s a test of your organization’s maturity. Suppose your product is a<br />system that accepts email requests for service. As soon as a request is received, it sends a<br />reply that says “your request of 5/12/97 was accepted and its reference ID is NIC-051297-3”.<br />A tester who sends in many requests per day finds she has difficulty keeping track of<br />which request goes with which ID. She wishes that the original request were appended to<br />the acknowledgement. Furthermore, she realizes that some customers will also generate<br />many requests per day, so would also appreciate this feature. Would she:<br />1. file a bug report documenting a usability problem, with the expectation that it will be<br />assigned a reasonably high priority (because the fix is clearly useful to everyone,<br />important to some users, and easy to do)?<br />2.
file a bug report with the expectation that it will be assigned “enhancement request”<br />priority and disappear forever into the bug database?<br />3. file a bug report that yields a “works as designed” resolution code, perhaps with an<br />email “nastygram” from a programmer or the development manager?<br />4. not bother with a bug report because it would end up in cases (2) or (3)?<br />If usability problems are not considered valid bugs, your project defines the<br />testing task too narrowly. Testers are restricted to checking whether the product does<br />what was intended, not whether what was intended is useful. Customers do not care<br />about the distinction, and testers shouldn’t either.<br />Testers are often the only people in the organization who use the system as heavily as an<br />expert. They notice usability problems that experts will see. (Formal usability testing<br />almost invariably concentrates on novice users.) Expert customers often don’t report<br />usability problems, because they’ve been trained to know it’s not worth their time.<br />Instead, they wait (in vain, perhaps) for a more usable product and switch to it. Testers<br />can prevent that lost revenue.<br />While defining the purpose of testing as “finding bugs important to customers” is a step<br />forward, it’s more restrictive than I like. It means that there is no focus on an<br />estimate of quality (and on the quality of that estimate). Consider these two<br />situations for a product with five subsystems.<br />1. 100 bugs are found in subsystem 1 before release. (For simplicity, assume that all bugs<br />are of the highest priority.) No bugs are found in the other subsystems. After release,<br />no bugs are reported in subsystem 1, but 12 bugs are found in each of the other<br />subsystems.<br />2. Before release, 50 bugs are found in subsystem 1. 6 bugs are found in each of the<br />other subsystems.
After release, 50 bugs are found in subsystem 1 and 6 bugs in each<br />of the other subsystems.<br />From the “find important bugs” standpoint, the first testing effort was superior. It found<br />100 bugs before release, whereas the second found only 74. But I think you can make a<br />strong case that the second effort is more useful in practical terms. Let me restate the two<br />situations in terms of what a test manager might say before release:<br />1. “We have tested subsystem 1 very thoroughly, and we believe we’ve found almost all<br />of the priority 1 bugs. Unfortunately, we don’t know anything about the bugginess of<br />the remaining five subsystems.”<br />2. “We’ve tested all subsystems moderately thoroughly. Subsystem 1 is still very buggy.<br />The other subsystems are about 1/10th as buggy, though we’re sure bugs remain.”<br />This is, admittedly, an extreme example, but it demonstrates an important point. The<br />project manager has a tough decision: would it be better to hold on to the product for<br />more work, or should it be shipped now? Many factors - all rough estimates of possible<br />futures - have to be weighed: Will a competitor beat us to release and tie up the market?<br />Will dropping an unfinished feature to make it into a particular magazine’s special “Java<br />Development Environments” issue cause us to suffer in the review? Will critical customer<br />X be more annoyed by a schedule slip or by a shaky product? Will the product be buggy<br />enough that profits will be eaten up by support costs or, worse, a recall? 1<br />The testing team will serve the project manager better if it concentrates first on providing<br />estimates of product bugginess (reducing uncertainty), then on finding more of the bugs<br />that are estimated to be there. That affects test planning, the topic of the next theme.<br />It also affects status reporting. Test managers often err by reporting bug data<br />without putting it into context. 
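The arithmetic behind these two situations can be tallied in a few lines; the counts are taken straight from the example above, and Python is used purely for illustration:

```python
# Five subsystems; each tuple is (bugs found before release,
# bugs found after release), using the counts from the example.
situation_1 = [(100, 0), (0, 12), (0, 12), (0, 12), (0, 12)]
situation_2 = [(50, 50), (6, 6), (6, 6), (6, 6), (6, 6)]

def summarize(subsystems):
    """Total bugs caught before release, and bugs that escaped to customers."""
    found_before = sum(pre for pre, post in subsystems)
    escaped = sum(post for pre, post in subsystems)
    return found_before, escaped

# Situation 1 catches more bugs pre-release (100 vs. 74), but only
# situation 2 yields a bugginess estimate for every subsystem.
print(summarize(situation_1))  # (100, 48)
print(summarize(situation_2))  # (74, 74)
```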
Without context, project management tends to<br />focus on one graph:<br />[Figure: “Bug Status” chart, plotting a count of bugs found and bugs fixed (0 to 120) against build number (1 to 9).]<br />The flattening in the curve of bugs found will be interpreted in the most optimistic possible<br />way unless you as test manager explain the limitations of the data:<br />· “Only half the planned testing tasks have been finished, so little is known about half<br />the areas in the project. There could soon be a big spike in the number of bugs<br />found.”<br />· “That’s especially likely because the last two weekly builds have been lightly tested.<br />I told the testers to take their vacations now, before the project hits crunch mode.”<br />· “Furthermore, based on previous projects with similar amounts and kinds of testing<br />effort, it’s reasonable to expect at least 45 priority-1 bugs remain undiscovered.<br />Historically, that’s pretty high for a successful product.”<br />For discussions of using bug data, see [Cusumano95], [Rothman96], and [Marick97].<br />1 Notice how none of the decisions depend solely on the product’s bugginess. That’s another reason why giving the<br />testing manager “stop ship” authority is a bad idea. He or she simply doesn’t have enough information to use that<br />authority wisely. The project manager might not have enough either, but won’t have less.<br />Earlier I asserted that testers can’t directly improve quality; they can only measure it.<br />That’s true only if you find yourself starting testing too late. Tests designed before<br />coding begins can improve quality. They inform the developer of the kinds of tests that<br />will be run, including the special cases that will be checked.
The developer can use that<br />information while thinking about the design, during design inspections, and in his own<br />developer testing.2<br />Early test design can do more than prevent coding bugs. As will be discussed in the next<br />theme, many tests will represent user tasks. The process of designing them can find user<br />interface and usability problems before expensive rework is required. I’ve found problems<br />like no user-visible place for error messages to go, pluggable modules that didn’t fit<br />together, two screens that had to be used together but could not be displayed<br />simultaneously, and “obvious” functions that couldn’t be performed. Test design fits<br />nicely into any usability engineering effort ([Nielsen93]) as a way of finding specification<br />bugs.<br />2 One person who worked in a pathologically broken organization told me that they were given the acceptance test in<br />advance. They coded the program to recognize the test cases and return the correct answer, bypassing completely<br />the logic that was supposed to calculate the answer. Few companies are that bad, but you could argue that<br />programmers will tend to produce code “trained” for the tests. If the tests are good, that’s not a problem - the code<br />is also trained for the real customers. The biggest danger is that the programmers will interpret the tests as narrow<br />special cases, rather than handling the more general situation. That can be forestalled by writing the early test<br />designs in terms of general situations rather than specific inputs: “more than two columns per page” rather than<br />“three two-inch columns on an A4 page”. Also, the tests given to the programmers will likely be supplemented by<br />others designed later.<br />I should note that involving testing early feels unnatural to many programmers and<br />development managers.
There may be feelings that you are intruding on their turf or not<br />giving them the chance to make the mistakes that are an essential part of design. Take<br />care, especially at first, not to increase their workload or slow them down. It may take<br />one or two entire projects to establish your credibility and usefulness.<br />Theme Two: Planning the Testing Effort<br />I’ll first discuss specific planning mistakes, then relate test planning to the role of testing.<br />It’s not unusual to see test plans biased toward functional testing. In functional<br />testing, particular features are tested in isolation. In a word processor, all the options for<br />printing would be applied, one after the other. Editing options would later get their own<br />set of tests.<br />But there are often interactions between features, and functional testing tends to miss<br />them. For example, you might never notice that the sequence of operations open a<br />document, edit the document, print the whole document, edit<br />one page, print that page doesn’t work. But customers surely will, because<br />they don’t use products functionally. They have a task orientation. To find the bugs that<br />customers see - that are important to customers - you need to write tests that cross<br />functional areas by mimicking typical user tasks. This type of testing is called scenario<br />testing, task-based testing, or use-case testing.<br />A bias toward functional testing also underemphasizes configuration testing.<br />Configuration testing checks how the product works on different hardware and when<br />combined with different third party software. There are typically many combinations that<br />need to be tried, requiring expensive labs stocked with hardware and much time spent<br />setting up tests, so configuration testing isn’t cheap. 
But it’s worth it when you discover<br />that your standard in-house platform which “entirely conforms to industry standards”<br />actually behaves differently from most of the machines on the market.<br />Both configuration testing and scenario testing test global, cross-functional aspects of the<br />product. Another type of testing that spans the product checks how it behaves under<br />stress (a large number of transactions, very large transactions, a large number of<br />simultaneous transactions). Putting stress and load testing off to the last<br />minute is common, but it leaves you little time to do anything substantive when you<br />discover your product doesn’t scale up to more than 12 users.3<br />3 Failure to apply particular types of testing is another reason why developers complain that testers aren’t finding the<br />important bugs. Developers of an operating system could be spending all their time debugging crashes of their<br />private machines, crashes due to networking bugs under normal load. The testers are doing straight “functional<br />tests” on isolated machines, so they don’t find bugs. The bugs they do find are not more serious than crashes<br />(usually defined as highest severity for operating systems), and they’re probably less.<br />Two related mistakes are not testing the documentation and not testing<br />installation procedures. Testing the documentation means checking that all the<br />procedures and examples in the documentation work. Testing installation procedures is a<br />good way to avoid making a bad first impression.<br />How about avoiding testing altogether?<br />At a conference last year, I met (separately) two depressed testers who told me their<br />management was of the opinion that the World Wide Web could reduce testing costs.<br />“Look at [wildly successful internet company]. They distribute betas over the network<br />and get their customers to do the testing for free!” The Windows 95 beta program is also<br />cited in similar ways.<br />Beware of an overreliance on beta testing. Beta testing seems to give you test<br />cases representative of customer use - because the test cases are customer use. Also, bugs<br />reported by customers are by definition those important to customers. However, there are<br />several problems:<br />1. The customers probably aren’t that representative. In the common high-tech<br />marketing model4, beta users, especially those of the “put it on your web site and they<br />will download” sort, are the early adopters, those who like to tinker with new<br />technologies. They are not the pragmatists, those who want to wait until the<br />technology is proven and safe to adopt. The usage patterns of these two groups are<br />different, as are the kinds of bugs they consider important. In particular, early<br />adopters have a high tolerance for bugs with workarounds and for bugs that “just go<br />away” when they reload the program. Pragmatists, who are much less tolerant, make<br />up the large majority of the market.<br />2. Even of those beta users who actually use the product, most will not use it seriously.<br />They will give it the equivalent of a quick test drive, rather than taking the whole<br />family for a two week vacation. As any car buyer knows, the test drive often leaves<br />unpleasant features undiscovered.<br />3. Beta users - just like customers in general - don’t report usability problems unless<br />prompted. They simply silently decide they won’t buy the final version.<br />4. Beta users - just like customers in general - often won’t report a bug, especially if<br />they’re not sure what they did to cause it, or if they think it is obvious enough that<br />someone else must have already reported it.<br />5. When beta users report a bug, the bug report is often unusable. It costs much more<br />time and effort to handle a user bug report than one generated internally.<br />4 See [Moore91] or [Moore95]. I briefly describe this model in a review of Moore’s books, available through Pure<br />Atria’s book review pages (http://www.pureatria.com).<br />Beta programs can be useful, but they require careful planning and monitoring if they are<br />to do more than give a warm fuzzy feeling that at least some customers have used the<br />product before it’s inflicted on all of them. See [Kaner93] for a brief description.<br />The one situation in which beta programs are unequivocally useful is in configuration<br />testing. For any possible screwy configuration, you can find a beta user who has it. You<br />can do much more configuration testing than would be possible in an in-house lab (or even<br />perhaps an outsourced testing agency). Beta users won’t do as thorough a job as a trained<br />tester, but they’ll catch gross errors of the “BackupBuster doesn’t work on this brand of<br />‘compatible’ floppy tape drive” sort.<br />Beta programs are also useful for building word of mouth advertising, getting “first<br />glance” reviews in magazines, supporting third-party vendors who will build their product<br />on top of yours, and so on. Those are properly marketing activities, not testing.<br />Planning and replanning in support of the role of testing<br />Each of the types of testing described above, including functional testing, reduces<br />uncertainty about a particular aspect of the product. When done, you have confidence<br />that some functional areas are less buggy, others more. The product either usually works<br />on new configurations, or it doesn’t.5<br />There’s a natural tendency toward finishing one testing task before moving on<br />to the next, but that may lead you to discover bad news too late.
It’s better to know<br />something about all areas than everything about a few. When you’ve discovered where the<br />problem areas lie, you can test them to greater depth as a way of helping the developers<br />raise the quality by finding the important bugs.6<br />Strictly, I’ve been over-simplistic in describing testing’s role as reducing uncertainty. It<br />would be better to say “risk-weighted uncertainty”. Some areas in the product are riskier<br />than others, perhaps because they’re used by more customers or because failures in that<br />area would be particularly severe. Riskier areas require more certainty. Failing to<br />correctly identify risky areas is a common mistake, and it leads to misallocated<br />testing effort. There are two sound approaches for identifying risky areas:<br />1. Ask everyone you can for their opinion. Gather data from developers, marketers,<br />technical writers, customer support people, and whatever customer representatives<br />you can find. See [Kaner96a] for a good description of this kind of collaborative test<br />planning.<br />2. Use historical data. Analyzing bug reports from past products (especially those from<br />customers, but also internal bug reports) helps tell you what areas to explore in this<br />project.<br />5 I use “confidence” in its colloquial rather than its statistical sense. Conventional testing that searches specifically<br />for bugs does not allow you to make statements like “this product will run on 95±5% of Wintel machines”. In that<br />sense, it’s weaker than statistical or reliability testing, which uses statistical profiles of the customer environment<br />to both find bugs and make failure estimates. (See [Dyer92], [Lyu96], and [Musa87].) Statistical testing can be<br />difficult to apply, so I concentrate on a search for bugs as the way to get a usable estimate. A lack of statistical<br />validity doesn’t mean that bug numbers give you nothing but “warm and fuzzy (or cold and clammy) feelings”.<br />Given a modestly stable testing process, development process, and product line, bug numbers lead to distinctly<br />better decisions, even if they don’t come with p-values or statistical confidence intervals.<br />6 It’s expensive to test quality into the product, but it may be the only alternative. Code redesigns and rewrites may<br />not be an option.<br />“So, winter’s early this year. We’re still going to invade Russia.”<br />Good testers are systematic and organized, yet they are exposed to all the chaos and twists<br />and turns and changes of plan typical of a software development project. In fact, the<br />chaos is magnified by the time it gets to testers, because of their position at the end of the<br />food chain and typically low status.7 One unfortunate reaction is sticking stubbornly<br />to the test plan. Emotionally, this can be very satisfying: “They can flail around<br />however they like, but I’m going to hunker down and do my job.” The problem is that<br />your job is not to write tests.
It’s to find the bugs that matter in the areas of greatest<br />uncertainty and risk, and ignoring changes in the reality of the product and project can<br />mean that your testing becomes irrelevant.8<br />That’s not to say that testers should jump to readjust all their plans whenever there’s a<br />shift in the wind, but my experience is that more testers let their plans fossilize than<br />overreact to project change.<br />Theme Three: Personnel Issues<br />Fresh out of college, I got my first job as a tester. I had been hired as a developer, and<br />knew nothing about testing, but, as they said, “we don’t know enough about you yet, so<br />we’ll put you somewhere where you can’t do too much damage”. In due course, I<br />“graduated” to development.<br />Using testing as a transitional job for new programmers is one of the two<br />classic mistaken ways to staff a testing organization. It has some virtues. One is that you<br />really can keep bad hires away from the code. A bozo in testing is often less dangerous<br />than a bozo in development. Another is that the developer may learn something about<br />testing that will be useful later. (In my case, it founded a career.) And it’s a way for the<br />new hire to learn the product while still doing some useful work.<br />The advantages are outweighed by the disadvantage: the new hire can’t wait to get out of<br />testing. That’s hardly conducive to good work. You could argue that the testers have to<br />do good work to get “paroled”. Unfortunately, because people tend to be as impressed by<br />effort as by results, vigorous activity - especially activity that establishes credentials as a<br />programmer - becomes the way out. As a result, the fledgling tester does things like<br />become the expert in the local programmable editor or complicated freeware tool. That,<br />at least, is a potentially useful role, though it has nothing to do with testing. More<br />dangerous is vigorous but misdirected testing activity; namely, test automation. (See the<br />last theme.)<br />7 How many proposed changes to a product are rejected because of their effect on the testing schedule? How often<br />does the effect on the testing team even cross a developer’s or marketer’s mind?<br />8 This is yet another reason why developers complain that testers aren’t finding the important bugs. Because of<br />market pressure, the project has shifted to an Internet focus, but the testers are still using and testing the old<br />“legacy” interface instead of the now critically important web browser interface.<br />Even if novice testers were well guided, having so much of the testing staff be transients<br />could only work if testing is a shallow algorithmic discipline. In fact, good testers require<br />deep knowledge and experience.<br />The second classic mistake is recruiting testers from the ranks of failed<br />programmers. There are plenty of good testers who are not good programmers, but a<br />bad programmer likely has some work habits that will make him a bad tester, too. For<br />example, someone who makes lots of bugs because he’s inattentive to detail will miss lots<br />of bugs for the same reason.<br />So how should the testing team be staffed? If you’re willing to be part of the training<br />department, go ahead and accept new programmer hires.9 Accept as applicants<br />programmers who you suspect are rejects (some fraction of them really have gotten tired<br />of programming and want a change) but interview them as you would an outside hire.<br />When interviewing, concentrate less on formal qualifications than on intelligence and the<br />character of the candidate’s thought.
A good tester has these qualities:[10]<br />· methodical and systematic.<br />· tactful and diplomatic (but firm when necessary).<br />· skeptical, especially about assumptions, and eager to see concrete evidence.<br />· able to notice and pursue odd details.<br />· good written and verbal skills (for explaining bugs clearly and concisely).<br />· a knack for anticipating what others are likely to misunderstand. (This is useful both in finding bugs and in writing bug reports.)<br />· a willingness to get one’s hands dirty, to experiment, to try something just to see what happens.<br /><br />Be especially careful to avoid the trap of testers who are not domain experts. Too often, the tester of an accounting package knows little about accounting. Consequently, she finds bugs that are unimportant to accountants and misses ones that are. Further, she writes bug reports that make serious bugs seem irrelevant. A programmer may not see past the unrepresentative test to the underlying important problem. (See the discussion of reporting bugs in the next theme.)<br /><br />Domain experts may be hard to find, but try to find a few. And hire testers who are quick studies and good at understanding other people’s work patterns.<br /><br />[9] Some organizations rotate all developers through testing. Well, all developers except those with enough clout to refuse. And sometimes people not in great demand don’t seem ever to rotate out. I’ve seen this approach work, but it’s fragile.<br />[10] See also the list in [Kaner93], chapter 15.<br /><br />Two groups of people are readily at hand and often have those skills. But testing teams often do not seek out applicants from the customer service staff or the technical writing staff.
The people who field email or phone problem reports develop, if they’re good, a sense of what matters to the customer (at least to the vocal customer), and the best are very quick on their mental feet.<br /><br />Like testers, technical writers often also lack detailed domain knowledge. However, they’re in the business of translating a product’s behavior into terms that make sense to a user. Good technical writers develop a sense of what’s important, what’s confusing, and so on. Those areas that are hard to explain are often fruitful sources of bugs. (What confuses the user often also confuses the programmer.)<br /><br />One reason these two groups are not tapped is an insistence that testers be able to program. Programming skill brings with it certain advantages in bug hunting. A programmer is more likely than an accountant to find the number 2,147,483,648 interesting. (It overflows a signed 32-bit integer.) But such tricks of the trade are easily learned by competent non-programmers, so not having them is a weak reason for turning someone down.<br /><br />If you hire according to these guidelines, you will avoid a testing team that lacks diversity. All of the members will lack some skills, but the team as a whole will have them all. Over time, in a team with mutual respect, the non-programmers will pick up essential tidbits of programming knowledge, the programmers will pick up domain knowledge, and the people with a writing background will teach the others how to deconstruct documents.<br /><br />All testers - but non-programmers especially - will be hampered by a physical separation between developers and testers. A smooth working relationship between developers and testers is essential to efficient testing. Too much valuable information is unwritten; the tester finds it by talking to developers.
Developers and testers must often work together in debugging; that’s much harder to do remotely. Developers often dismiss bug reports too readily, but it’s harder to do that to a tester you eat lunch with.<br /><br />Remote testing can be made to work - I’ve done it - but you have to be careful. Budget money for frequent working visits, and pay attention to interpersonal issues.<br /><br />Some believe that programmers can’t test their own code. On the face of it, this is false: programmers test their code all the time, and they do find bugs. Just not enough of them, which is why we need independent testers.<br /><br />But if independent testers are testing, and programmers are testing (and inspecting), isn’t there a potential duplication of effort? And isn’t that wasteful? I think the answer is yes. Ideally, programmers would concentrate on the types of bugs they can find adequately well, and independent testers would concentrate on the rest.<br /><br />The bugs programmers can find well are those where their code does not do what they intended. For example, a reasonably trained, reasonably motivated programmer can do a perfectly fine job finding boundary conditions and checking whether each known equivalence class is handled. What programmers do poorly is discovering overlooked special cases (especially error cases), bugs due to the interaction of their code with other people’s code (including system-wide properties like deadlocks and performance problems), and usability problems.<br /><br />Crudely put, good programmers do functional testing, and testers should do everything else.[11] Recall that I earlier claimed an over-concentration on functional testing is a classic mistake. Decent programmer testing magnifies the damage it does.<br /><br />Of course, decent programmer testing is relatively rare, because programmers are neither trained nor motivated to test.
This is changing, gradually, as companies realize it’s cheaper to have bugs found and fixed quickly by one person than more slowly by two. Until then, testers must do both the testing that programmers can do and the testing only testers can do, but they must take care not to let functional testing squeeze out the rest.<br /><br />Theme Four: The Tester At Work<br /><br />When testing, you must decide how to exercise the program, then do it. The doing is ever so much more interesting than the deciding. A tester’s itch to start breaking the program is as strong as a programmer’s itch to start writing code - and it has the same effect: design work is skimped, and quality suffers. Paying more attention to running tests than to designing them is a classic mistake. A tester who is not systematic, who does not spend time laying out the possibilities in advance, will overlook special cases. They may be the same subtle ones that the programmers overlooked.<br /><br />Concentration on execution also results in unreviewed test designs. Just like programmers, testers can benefit from a second pair of eyes. Reviews of test designs needn’t be as elaborate as product design reviews, but a short check of the testing approach and the resulting tests can find significant omissions at low cost.<br /><br />What is a test design?<br /><br />A test design should contain a description of the setup (including machine configuration for a configuration test), the inputs given to the product, and a description of expected results. One common mistake is being too specific about test inputs and procedures.<br /><br />Let’s assume manual test implementation for the moment. A related argument for automated tests will be discussed in the next section. Suppose you’re testing a banking application. Here are two possible test designs:<br /><br />[11] Independent testers will also provide a “safety net” for programmer testing. A certain amount of functional testing might be planned, or it might be a side effect of the other types of testing being done.<br /><br />Design 1<br />Setup: initialize the balance in account 12 with $100.<br />Procedure:<br />Start the program.<br />Type 12 in the Account window.<br />Press OK.<br />Click on the ‘Withdraw’ toolbar button.<br />In the withdraw popup dialog, click on the ‘all’ button.<br />Press OK.<br />Expect to see a confirmation popup that says “You are about to withdraw all the money from this account. Continue?”<br />Press OK.<br />Expect to see a 0 balance in the account window.<br />Separately query the database to check that the zero balance has been posted.<br />Exit the program with File->Exit.<br /><br />Design 2<br />Setup: initialize the balance with a positive value.<br />Procedure:<br />Start the program on that account.<br />Withdraw all the money from the account using the ‘all’ button.<br />It’s an error if the transaction happens without a confirmation popup.<br />Immediately thereafter:<br />- Expect a $0 balance to be displayed.<br />- Independently query the database to check that the zero balance has been posted.<br /><br />The first design style has these advantages:<br />· The test will always be run the same way. You are more likely to be able to reproduce the bug. So will the programmer.<br />· It details all the important expected results to check. Imprecise expected results make failures harder to notice. For example, a tester using the second style would find it easier to overlook a spelling error in the confirmation popup, or even that it was the wrong popup.<br />· Unlike the second style, you always know exactly what you’ve tested. In the second style, you couldn’t be sure that you’d ever gotten to the Withdraw dialog via the toolbar. Maybe the menu was always used.
Maybe the toolbar button doesn’t work at all!<br />· By spelling out all inputs, the first style prevents testers from carelessly overusing simple values. For example, a tester might always test accounts with $100, rather than using a variety of small and large balances. (Either style should include explicit tests for boundary and special values.)<br /><br />However, there are also some disadvantages:<br />· The first style is more expensive to create.<br />· The inevitable minor changes to the user interface will break it, so it’s more expensive to maintain.<br />· Because each run of the test is exactly the same, there’s no chance that a variation in procedure will stumble across a bug.<br />· It’s hard for testers to follow a procedure exactly. When one makes a mistake - pushes the wrong button, for example - will she really start over?<br /><br />On balance, I believe the negatives often outweigh the positives, provided there is a separate testing task to check that all the menu items and toolbar buttons are hooked up. (Not only is a separate task more efficient, it’s less error-prone. You’re less likely to accidentally omit some buttons.)<br /><br />I do not mean to suggest that test cases should not be rigorous, only that they should be no more rigorous than is justified, and that we testers sometimes err on the side of uneconomical detail.<br /><br />Detail in the expected results is less problematic than in the test procedure, but too much detail can focus the tester’s attention too much on checking against the script he’s following. That might encourage another classic mistake: not noticing and exploring “irrelevant” oddities. Good testers are masters at noticing “something funny” and acting on it. Perhaps there’s a brief flicker in some toolbar button which, when investigated, reveals a crash.
Perhaps an operation takes an oddly long time, which suggests to the attentive tester that increasing the size of an “irrelevant” dataset might cause the program to slow to a crawl. Good testing is a combination of following a script and using it as a jumping-off point for an exploration of the product.<br /><br />An important special case of overlooking bugs is checking that the product does what it’s supposed to do, but not that it doesn’t do what it isn’t supposed to do. As an example, suppose you have a program that updates a health care service’s database of family records. A test adds a second child to Dawn Marick’s record. Almost all testers would check that, after the update, Dawn now has two children. Some testers - those who are clever, experienced, or subject matter experts - would check that Dawn Marick’s spouse, Brian Marick, also now has two children. Relatively few testers would check that no one else in the database has had a child added. They would miss a bug where the programmer over-generalized and assumed that all “family information” updates should be applied both to a patient and to all members of her family, giving Paul Marick (aged 2) a child.<br /><br />Ideally, every test should check that all data that should be modified has been modified and that all other data has been left unchanged. With forethought, that can be built into automated tests. Complete checking may be impractical for manual tests, but occasional quick scans for data that might be corrupted can be valuable.<br /><br />Testing should not be isolated work<br /><br />Here’s another version of the test we’ve been discussing:<br /><br />Design 3<br />Withdraw all with confirmation and normal check for 0.<br /><br />That means the same thing as Design 2 - but only to the original author. Test suites that are understandable only by their owners are ubiquitous.
They cause many problems when their owners leave the company; sometimes many months’ worth of work has to be thrown out.<br /><br />I should note that designs as detailed as Designs 1 or 2 often suffer a similar problem. Although they can be run by anyone, not everyone can update them when the product’s interface changes. Because the tests do not list their purposes explicitly, updates can easily make them test a little less than they used to. (Consider, for example, a suite of tests in the Design 1 style: how hard will it be to make sure that all the user interface controls are touched in the revised tests? Will the tester even know that’s a goal of the suite?) Over time, this leads to what I call “test suite decay,” in which a suite full of tests runs but no longer tests much of anything at all.[12]<br /><br />Another classic mistake involves the boundary between the tester and the programmer. Some products are mostly user interface; everything they do is visible on the screen. Other products are mostly internals; the user interface is a “thin pipe” that shows little of what happens inside. The problem is that testing has to use that thin pipe to discover failures. What if complicated internal processing produces only a “yes or no” answer? Any given test case could trigger many internal faults that, through sheer bad luck, don’t produce the wrong answer.[13]<br /><br />In such situations, testers sometimes rely solely on programmer (“unit”) testing. In cases where that’s not enough, testing only through the user-visible interface is a mistake. It is far better to get the programmers to add “testability hooks” or “testpoints” that reveal selected internal state. In essence, they convert a product like this:<br /><br />[Diagram: the Guts of the Product, reachable only through the User Interface]<br /><br />to one like this:<br /><br />[Diagram: the Guts of the Product, reachable through both the User Interface and a separate Testing Interface]<br /><br />It is often difficult to convince programmers to add test support code to the product. (Actual quote: “I don’t want to clutter up my code with testing crud.”) Persevere, start modestly, and take advantage of these facts:<br />1. The test support code is often a simple extension of the debugging support code programmers write anyway.[14]<br />2. A small amount of test support code often goes a long way.<br /><br />A common objection to this approach is that the test support code must be compiled out of the final product (to avoid slowing it down). If so, tests that use the testing interface “aren’t testing what we ship.” It is true that some of the tests won’t run on the final version, so you may miss bugs. But, without testability code, you’ll miss bugs that don’t reveal themselves through the user interface. It’s a risk tradeoff, and I believe that adding test support code usually wins. See [Marick95], chapter 13, for more details.<br /><br />In one case, there’s an alternative to having the programmer add code to the product: have a tool do it. Commercial tools like Purify, BoundsChecker, and Sentinel automatically add code that checks for certain classes of failures (such as memory leaks).[15] They provide a narrow, specialized testing interface.<br /><br />[12] The purpose doesn’t need to be listed with the test. It may be better to have a central document describing the purposes of a group of tests, perhaps in tabular form. Of course, then you have to keep that document up to date.<br />[13] This is an example of the formal notion of “testability.” See [Friedman95] or [Voas91] for an academic treatment.
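As a sketch of what a hand-written testability hook of the kind described earlier might look like (in Python, with every name here hypothetical and invented for illustration): the user-visible interface yields only a yes-or-no answer, while a testpoint reveals the internal state behind it.

```python
# A "thin pipe" product: callers see only a yes-or-no answer.
# The testpoint method below is a hypothetical testability hook that
# reveals selected internal state; it could be disabled in the shipped build.

class LoanApprover:
    def __init__(self):
        self._last_score = None  # internal state invisible through the UI

    def approve(self, income, debt):
        """The user-visible interface: only a yes-or-no answer escapes."""
        self._last_score = income - 2 * debt
        return self._last_score > 0

    def testpoint_last_score(self):
        """Testing interface: expose the score the answer was based on."""
        return self._last_score


approver = LoanApprover()
assert approver.approve(income=100, debt=30) is True
# Many internal faults could still yield the right yes-or-no answer.
# The hook lets a test check how the answer was computed, not just what it was:
assert approver.testpoint_last_score() == 40
```

Even a hook this small narrows the thin pipe: a test can now distinguish a correct answer computed correctly from a correct answer reached by luck.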
For marketing reasons, these memory-checking tools are sold as programmer debugging tools, but they’re equally test support tools, and I’m amazed that testing groups don’t use them as a matter of course.<br /><br />Testability problems are exacerbated in distributed systems like conventional client/server systems, multi-tiered client/server systems, Java applets that provide smart front ends to web sites, and so forth. Too often, tests of such systems amount to shallow tests of the user interface component, because that’s the only component the tester can easily control.<br /><br />[14] For example, the Java language encourages programmers to use the toString method to make internal objects printable. A programmer doesn’t have to use it, since the debugger lets her see all the values in any object, but it simplifies debugging for objects she’ll look at often. All testers need (roughly) is a way to call toString from some external interface.<br />[15] For a list of such commercial tools, see http://www.stlabs.com/marick/faqs/tools.htm. Follow the link to “Other Test Implementation Tools”.<br /><br />Finding failures is only the start<br /><br />It’s not enough to find a failure; you must also report it. Unfortunately, poor bug reporting is a classic mistake. Tester bug reports suffer from five major problems:<br />1. They do not describe how to reproduce the bug. Either no procedure is given, or the given procedure doesn’t work. Either case will likely get the bug report shelved.<br />2. They don’t explain what went wrong. At what point in the procedure does the bug occur? What should happen there? What actually happened?<br />3. They are not persuasive about the priority of the bug. Your job is to have the seriousness of the bug accurately assessed. There’s a natural tendency for programmers and managers to rate bugs as less serious than they are.
If you believe a bug is serious, explain why a customer would view it the way you do.[16] If you found the bug with an odd case, take the time to reproduce it with a more obviously common or compelling case.<br />4. They do not help the programmer in debugging. This is a simple cost/benefit tradeoff. A small amount of time spent simplifying the procedure for reproducing the bug, or exploring the various ways it could occur, may save a great deal of programmer time.<br />5. They are insulting, so they poison the relationship between developers and testers.<br /><br />[Kaner93] has an excellent chapter (5) on how to write bug reports. Read it.<br /><br />Not all bug reports come from testers. Some come from customers. When that happens, it’s common for a tester to write a regression test that reproduces the bug in the broken version of the product. When the bug is fixed, that test is used to check that it was fixed correctly.<br /><br />However, adding only regression tests is not enough. A customer bug report suggests two things:<br />1. That area of the product is buggy. It’s well known that bugs tend to cluster.[17]<br />2. That area of the product was inadequately tested. Otherwise, why did the bug originally escape testing?<br /><br />An appropriate response to several customer bug reports in an area is to schedule more thorough testing for that area. Begin by examining the current tests (if they’re understandable) to determine their systematic weaknesses.<br /><br />Finally, every bug report is a gift from a customer that tells you how to test better in the future. A common mistake is failing to take notes for the next testing effort.<br /><br />[16] Cem Kaner suggests something even better: have the person whose budget will be directly affected explain why the bug is important.
The customer service manager will speak more authoritatively about those installation bugs than you could.<br />[17] That’s true even if the bug report is due to a customer misunderstanding. Perhaps this area of the product is just too hard to understand.<br /><br />The next product will be somewhat like this one, the bugs will be somewhat like these, and the tests useful in finding those bugs will also be somewhat like the ones you just ran. Mental notes are easy to forget, and they’re hard to hand to a new tester. Writing is a wonderful human invention: use it. Both [Kaner93] and [Marick95] describe formats for archiving test information, and both contain general-purpose examples.<br /><br />Theme Five: Technology Run Rampant<br /><br />Test automation is based on a simple economic proposition:<br />· If a manual test costs $X to run the first time, it will cost just about $X to run each time thereafter, whereas:<br />· If an automated test costs $Y to create, it will cost almost nothing to run from then on.<br /><br />$Y is bigger than $X. I’ve heard estimates ranging from 3 to 30 times as big, with the most commonly cited number seeming to be 10. Suppose 10 is correct for your application and your automation tools. Then you should automate any test that will be run more than 10 times.<br /><br />A classic mistake is to ignore these economics, attempting to automate all tests, even those that won’t be run often enough to justify it. What tests clearly justify automation?<br />· Stress or load tests may be impossible to implement manually. Would you have a tester execute and check a function 1000 times? Are you going to sit 100 people down at 100 terminals?<br />· Nightly builds are becoming increasingly common. (See [McConnell96] or [Cusumano95] for descriptions of the procedure.) If you build the product nightly, you must have an automated “smoke test suite”.
Smoke tests are those that are run after every build to check for grievous errors.<br />· Configuration tests may be run on dozens of configurations.<br /><br />The other kinds of tests are less clear-cut. Think hard about whether you’d rather have automated tests that are run often or ten times as many manual tests, each run once. Beware of irrational, emotional reasons for automating, such as testers who find programming automated tests more fun, a perception that automated tests will lead to higher status (everything else is “monkey testing”), or a fear of not rerunning a test that would have found a bug (thus leading you to automate it, leaving you without enough time to write a test that would have found a different bug).<br /><br />You will likely end up in a compromise position, where you have:<br />1. a set of automated tests that are run often.<br />2. a well-documented set of manual tests. Subsets of these can be rerun as necessary. For example, when a critical area of the system has been extensively changed, you might rerun its manual tests. You might run different samples of this suite after each major build.[18]<br />3. a set of undocumented tests that were run once (including exploratory “bug bash” tests).<br /><br />Beware of expecting to rerun all manual tests. You will become bogged down rerunning tests with low bug-finding value, leaving yourself no time to create new tests. You will waste time documenting tests that don’t need to be documented.<br /><br />You could automate more tests if you could lower the cost of creating them. That’s the promise of using GUI capture/replay tools to reduce test creation cost. The notion is that you simply execute a manual test, and the tool records what you do.
When you manually check the correctness of a value, the tool remembers that correct value. You can then later play back the recording, and the tool will check whether all checked values are the same as the remembered values.<br /><br />There are two variants of such tools. What I call first-generation tools capture raw mouse movements or keystrokes and take snapshots of the pixels on the screen. Second-generation tools (often called “object oriented”) reach into the program and manipulate the underlying data structures (widgets or controls).[19]<br /><br />First-generation tools produce unmaintainable tests. Whenever the screen layout changes in the slightest way, the tests break. Mouse clicks are delivered to the wrong place, and snapshots fail in irrelevant ways that nevertheless have to be checked. Because screen layout changes are common, the constant manual updating of tests becomes insupportable.<br /><br />Second-generation tools are applicable only to tests where the underlying data structures are useful. For example, they rarely apply to a photograph editing tool, where you need to look at an actual image - at the actual bitmap. They also tend not to work with custom controls. Heavy users of capture/replay tools seem to spend an inordinate amount of time trying to get the tool to deal with the special features of their program - which raises the cost of test automation.<br /><br />Second-generation tools do not guarantee maintainability either. Suppose a radio button is changed to a pulldown list. All of the tests that use the old control will now be broken. GUI interface changes are of course common, especially between releases. Consider carefully whether an automated test that must be recaptured after GUI changes is worth having.
Keep in mind that it can be hard to figure out what a captured test is attempting to accomplish unless it is separately documented.<br /><br />[18] An additional benefit of automated tests is that they can be run faster than manual tests. That allows you to reduce the time between completion of a build and completion of its testing. That can be especially important in the final builds, if only to avoid pressure from executives itching to ship the product. You’re trading fewer tests for faster time to market. That can be a reasonable tradeoff, but it doesn’t affect the core of my argument, which is that not all tests should be automated.<br />[19] These are, in effect, another example of tools that add test support code to the program.<br /><br />As a rule of thumb, it’s dangerous to assume that an automated test will pay for itself this release, so your test must be able to survive a reasonable level of GUI change. I believe that capture/replay tests, of either generation, are rarely robust enough.<br /><br />An alternative approach to capture/replay is scripting tests. (Most GUI capture/replay tools also allow scripting.) Some member of the testing team writes a “test API” (application programmer interface) that lets other members of the team express their tests in less GUI-dependent terms. Whereas a captured test might look like this:<br /><br />text $main.accountField “12”<br />click $main.OK<br />menu $operations<br />menu $withdraw<br />click $withdrawDialog.all<br />...<br /><br />a script might look like this:<br /><br />select-account 12<br />withdraw all<br />...<br /><br />The script commands are subroutines that perform the appropriate mouse clicks and key presses.
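To make the idea concrete, here is a minimal Python sketch of such a test API (every name is hypothetical, and the GUI-driver layer is a stand-in that merely records actions; in a real harness those helpers would call the automation tool's scripting interface):

```python
# Stand-in GUI driver: records the clicks and keystrokes that a real
# capture/replay tool's scripting layer would actually perform.
actions = []

def click(widget):
    actions.append(("click", widget))

def type_text(widget, text):
    actions.append(("text", widget, text))

# The test API: tests speak in these terms, not in widget names.
# If the GUI changes, only these two functions need updating.

def select_account(number):
    type_text("main.accountField", str(number))
    click("main.OK")

def withdraw(amount):
    click("toolbar.withdraw")
    if amount == "all":
        click("withdrawDialog.all")
    else:
        type_text("withdrawDialog.amount", str(amount))
    click("withdrawDialog.OK")

# A test written against the API, mirroring the script above:
select_account(12)
withdraw("all")
```

The test itself never mentions a widget, which is exactly what lets it survive interface changes.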
If the API is well-designed, most GUI changes will require changes only to the implementation of functions like withdraw, not to all the tests that use them.[20] Please note that well-designed test APIs are as hard to write as any other good API. That is, they’re hard, and you shouldn’t expect to get it right the first time.<br /><br />[20] The “Joe Gittano” stories and essays on my web page, http://www.stlabs.com/marick/root.htm, go into this approach in more detail.<br /><br />In a variant of this approach, the tests are data-driven. The tester provides a table describing key values. Some tool reads the table and converts it to the appropriate mouse clicks. The table is even less vulnerable to GUI changes because the sequence of operations has been abstracted away. It’s also likely to be more understandable, especially to domain experts who are not programmers. See [Pettichord96] for an example of data-driven automated testing.<br /><br />Note that these more abstract tests (whether scripted or data-driven) do not necessarily test the user interface thoroughly. If the Withdraw dialog can be reached via several routes (toolbar, menu item, hotkey), you don’t know whether each route has been tried. You need a separate (most likely manual) effort to ensure that all the GUI components are connected correctly.<br /><br />Whatever approach you take, don’t fall into the trap of expecting regression tests to find a high proportion of new bugs. Regression tests discover that new or changed code breaks what used to work. While that happens more often than any of us would like, most bugs are in the product’s new or intentionally changed behavior.
Those bugs have to be caught by new tests.<br /><br />I ♥ code coverage<br /><br />GUI capture/replay testing is appealing because it’s a quick fix for a difficult problem. Another class of tool has the same kind of attraction.<br /><br />The difficult problem is that it’s so hard to know if you’re doing a good job testing. You only really find out once the product has shipped. Understandably, this makes managers uncomfortable. Sometimes you find them embracing code coverage with the devotion that only simple numbers can inspire. Testers sometimes also become enamored of coverage, though their romance tends to be less fervent and ends sooner.<br /><br />What is code coverage? It is any of a number of measures of how thoroughly code is exercised. One common measure counts how many statements have been executed by any test. The appeal of such coverage is twofold:<br />1. If you’ve never exercised a line of code, you surely can’t have found any of its bugs. So you should design tests to exercise every line of code.<br />2. Test suites are often too big, so you should throw out any test that doesn’t add value. A test that adds no new coverage adds no value.<br /><br />Only the first sentences in (1) and (2) are true. I’ll illustrate with a picture in which irregular splotches indicate bugs:<br /><br />[Figure: two overlapping regions of tests - “Tests needed to find bugs” and “Tests needed for coverage” - with bugs shown as irregular splotches.]<br /><br />If you write only the tests needed to satisfy coverage, you’ll find bugs. You’re guaranteed to find the code that always fails, no matter how it’s executed. But most bugs depend on how a line of code is executed. For example, code with an off-by-one error fails only when you exercise a boundary. Code with a divide-by-zero error fails only if you divide by zero. Coverage-adequate tests will find some of these bugs, by sheer dumb luck, but not enough of them.
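A hypothetical sketch of the off-by-one case makes the limitation concrete: the two tests below execute every statement of the function - full statement coverage - and both pass, yet the boundary bug survives.

```python
def bulk_discount(quantity):
    """Intended rule: orders of 100 or more get a 10% discount.
    The `>` should be `>=` - an off-by-one bug at the boundary."""
    if quantity > 100:
        return 0.10
    return 0.0

# These two tests achieve 100% statement coverage, and both pass...
assert bulk_discount(150) == 0.10
assert bulk_discount(5) == 0.0

# ...but only a boundary-value test exposes the bug:
# bulk_discount(100) returns 0.0, where the intended rule says 0.10.
```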
To find enough bugs, you have to write additional tests that “redundantly” execute the code.<br />For the same reason, removing tests from a regression test suite just because they don’t add coverage is dangerous. The point is not to cover the code; it’s to have tests that can discover enough of the bugs that are likely to be caused when the code is changed. Unless the tests are ineptly designed, removing tests will just remove power. If they are ineptly designed, using coverage converts a big and lousy test suite to a small and lousy test suite. That’s progress, I suppose, but it’s addressing the wrong problem.21<br />A grave danger of code coverage is that it is concrete, objective, and easy to measure. Many managers today are using coverage as a performance goal for testers. Unfortunately, a cardinal rule of management applies here: “Tell me how a person is evaluated, and I’ll tell you how he behaves.” If a person is evaluated by how much coverage is achieved in a given time (or in how little time it takes to reach a particular coverage goal), that person will tend to write tests to achieve high coverage in the fastest way possible. Unfortunately, that means shortchanging careful test design that targets bugs, and it certainly means avoiding in-depth, repetitive testing of “already covered” code.22<br />Using coverage as a test design technique works only when the testers are both designing poor tests and testing redundantly. They’d be better off at least targeting their poor tests at new areas of code. In more normal situations, coverage as a guide to design only decreases the value of the tests or puts testers under unproductive pressure to meet unhelpful goals.<br />Coverage does play a role in testing, not as a guide to test design, but as a rough evaluation of it. After you’ve run your tests, ask what their coverage is. 
If certain areas of the code have no or low coverage, you’re sure to have tested them shallowly. If that wasn’t intentional, you should improve the tests by rethinking their design. Coverage has told you where your tests are weak, but it’s up to you to understand how.<br />You might not entirely ignore coverage. You might glance at the uncovered lines of code (possibly assisted by the programmer) to discover the kinds of tests you omitted. For example, you might scan the code to determine that you undertested a dialog box’s error handling. Having done that, you step back and think of all the user errors the dialog box should handle, not how to provoke the error checks on lines 343, 354, and 399. By rethinking design, you’ll not only execute those lines, you might also discover that several other error checks are entirely missing. (Coverage can’t tell you how well you would have exercised needed code that was left out of the program.)<br />[Footnote 21: Not all regression test suites have the same goals. Smoke tests are intended to run fast and find grievous, obvious errors. A coverage-minimized test suite is entirely appropriate.]<br />[Footnote 22: In pathological cases, you’d never bother with user scenario testing, load testing, or configuration testing, none of which add much, if any, coverage to functional testing.]<br />There are types of coverage that point more directly to design mistakes than statement coverage does (branch coverage, for example).23 However, none - and not all of them put together - are so accurate that they can be used as test design techniques.<br />One final note: Romances with coverage don’t seem to end with the former devotee wanting to be “just good friends”. When, at the end of a year’s use of coverage, it has not solved the testing problem, I find testing groups abandoning coverage entirely.<br />That’s a shame. 
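As an illustration of coverage-as-evaluation (my sketch, using only Python's standard `sys.settrace` rather than a real coverage tool), you can record which lines a test exercised and then ask which lines it missed:

```python
import sys

def make_tracer(executed):
    """Build a trace function that records every executed line number."""
    def tracer(frame, event, arg):
        if event == "line":
            executed.add(frame.f_lineno)
        return tracer
    return tracer

def classify(age):
    if age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    return "adult"

executed = set()
sys.settrace(make_tracer(executed))
classify(30)          # the "suite" exercises only the "adult" path
sys.settrace(None)

# Which lines of classify's body never ran? Those mark shallowly tested
# behavior -- here, the "invalid" and "minor" branches.
first = classify.__code__.co_firstlineno
body = set(range(first + 1, first + 6))   # the five lines after "def"
missed = sorted(body - executed)
print(len(missed))    # 2 uncovered lines
```

The uncovered lines don't tell you what tests to write; they tell you which behaviors (negative ages, minors) you never thought about, which is the rethinking-the-design step described above.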
When I test, I spend somewhat less than 5% of my time looking at coverage results, rethinking my test design, and writing some new tests to correct my mistakes. It’s time well spent.<br />Acknowledgements<br />My discussions about testing with Cem Kaner have always been illuminating. The LAWST (Los Altos Workshop on Software Testing) participants said many interesting things about automated GUI testing. The LAWST participants were Chris Agruss, Tom Arnold, James Bach, Jim Brooks, Doug Hoffman, Cem Kaner, Brian Lawrence, Tom Lindemuth, Noel Nyman, Brett Pettichord, Drew Pritsker, and Melora Svoboda. Paul Czyzewski, Peggy Fouts, Cem Kaner, Eric Petersen, Joe Strazzere, Melora Svoboda, and Stephanie Young read an earlier draft.<br />References<br />[Cusumano95] M. Cusumano and R. Selby, Microsoft Secrets, Free Press, 1995.<br />[Dyer92] Michael Dyer, The Cleanroom Approach to Quality Software Development, Wiley, 1992.<br />[Friedman95] M. Friedman and J. Voas, Software Assessment: Reliability, Safety, Testability, Wiley, 1995.<br />[Kaner93] C. Kaner, J. Falk, and H.Q. Nguyen, Testing Computer Software (2/e), Van Nostrand Reinhold, 1993.<br />[Kaner96a] Cem Kaner, “Negotiating Testing Resources: A Collaborative Approach,” a position paper for the panel session on “How to Save Time and Money in Testing”, in Proceedings of the Ninth International Quality Week (Software Research, San Francisco, CA), 1996. (http://www.kaner.com/negotiate.htm)<br />[Kaner96b] Cem Kaner, “Software Negligence & Testing Coverage,” in Proceedings of STAR 96 (Software Quality Engineering, Jacksonville, FL), 1996. (http://www.kaner.com/coverage.htm)<br />[Footnote 23: See [Marick95], chapter 7, for a description of additional code coverage measures. See also [Kaner96b] for a list of more than one hundred types of coverage.]<br />[Lyu96] Michael R. 
Lyu (ed.), Handbook of Software Reliability Engineering, McGraw-Hill, 1996.<br />[Marick95] Brian Marick, The Craft of Software Testing, Prentice Hall, 1995.<br />[Marick97] Brian Marick, “The Test Manager at the Project Status Meeting,” in Proceedings of the Tenth International Quality Week (Software Research, San Francisco, CA), 1997. (http://www.stlabs.com/~marick/root.htm)<br />[McConnell96] Steve McConnell, Rapid Development, Microsoft Press, 1996.<br />[Moore91] Geoffrey A. Moore, Crossing the Chasm, Harper Collins, 1991.<br />[Moore95] Geoffrey A. Moore, Inside the Tornado, Harper Collins, 1995.<br />[Musa87] J. Musa, A. Iannino, and K. Okumoto, Software Reliability: Measurement, Prediction, Application, McGraw-Hill, 1987.<br />[Nielsen93] Jakob Nielsen, Usability Engineering, Academic Press, 1993.<br />[Pettichord96] Bret Pettichord, “Success with Test Automation,” in Proceedings of the Ninth International Quality Week (Software Research, San Francisco, CA), 1996. (http://www.io.com/~wazmo/succpap.htm)<br />[Rothman96] Johanna Rothman, “Measurements to Reduce Risk in Product Ship Decisions,” in Proceedings of the Ninth International Quality Week (Software Research, San Francisco, CA), 1996. (http://world.std.com/~jr/Papers/QW96.html)<br />[Voas91] J. Voas, L. Morell, and K. 
Miller, “Predicting Where Faults Can Hide from Testing,” IEEE Software, March 1991.<br />Some Classic Testing Mistakes<br />The role of testing<br />· Thinking the testing team is responsible for assuring quality.<br />· Thinking that the purpose of testing is to find bugs.<br />· Not finding the important bugs.<br />· Not reporting usability problems.<br />· No focus on an estimate of quality (and on the quality of that estimate).<br />· Reporting bug data without putting it into context.<br />· Starting testing too late (bug detection, not bug reduction).<br />Planning the complete testing effort<br />· A testing effort biased toward functional testing.<br />· Underemphasizing configuration testing.<br />· Putting stress and load testing off to the last minute.<br />· Not testing the documentation.<br />· Not testing installation procedures.<br />· An overreliance on beta testing.<br />· Finishing one testing task before moving on to the next.<br />· Failing to correctly identify risky areas.<br />· Sticking stubbornly to the test plan.<br />Personnel issues<br />· Using testing as a transitional job for new programmers.<br />· Recruiting testers from the ranks of failed programmers.<br />· Testers are not domain experts.<br />· Not seeking candidates from the customer service staff or technical writing staff.<br />· Insisting that testers be able to program.<br />· A testing team that lacks diversity.<br />· A physical separation between developers and testers.<br />· Believing that programmers can’t test their own code.<br />· Programmers are neither trained nor motivated to test.<br />The tester at work<br />· Paying more attention to running tests than to designing them.<br />· Unreviewed test designs.<br />· Being too specific about test inputs and procedures.<br />· Not noticing and exploring “irrelevant” oddities.<br />· Checking that the product does what it’s supposed to do, but not that it doesn’t do what 
it isn’t supposed to do.<br />· Test suites that are understandable only by their owners.<br />· Testing only through the user-visible interface.<br />· Poor bug reporting.<br />· Adding only regression tests when bugs are found.<br />· Failing to take notes for the next testing effort.<br />Test automation<br />· Attempting to automate all tests.<br />· Expecting to rerun manual tests.<br />· Using GUI capture/replay tools to reduce test creation cost.<br />· Expecting regression tests to find a high proportion of new bugs.<br />Code coverage<br />· Embracing code coverage with the devotion that only simple numbers can inspire.<br />· Removing tests from a regression test suite just because they don’t add coverage.<br />· Using coverage as a performance goal for testers.<br />· Abandoning coverage entirely.<br />Rajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-52718026812097640122008-01-14T17:10:00.001-08:002008-01-14T17:10:34.956-08:00..Test Methods...<span style="font-weight:bold;">..Test Methods...</span><br />1. What approach should be used for testing?<br />2. What are the Test Derivation Techniques?<br />3. How many different Test Types are there?<br />4. Why use Generic Test Objectives?<br />5. What are Quality Gates?<br />6. What Acceptance Criteria should be used?<br />7. Testing Metrics - Do you have examples?<br />8. Why use Test Scripts?<br />9. What tools are available for Test Support?<br />10. How-to Guides - What are they?<br />11. What are the 10 best steps for software testing?<br /><br />1. What approach should be used for testing? <br /><br /> There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation. <br /> Common quality attributes include reliability, stability, portability, maintainability and usability. 
<br /> Regardless of the methods used or the level of formality involved, the desired result of testing is a level of confidence in the software, so that the developers are confident that the software has an acceptable defect rate. <br /> When changes are made to software, a regression test ensures that the changes made in the current software do not affect the functionality of the existing software. <br /> The role of highly skilled professionals in software development has never been more difficult - or more crucial - as organisations try to complete application development faster and more cost-effectively. <br /> Test teams that use manual testing exclusively are struggling to keep up. <br /> Because they cannot test all the code, they risk missing significant defects. At the same time, they cannot stop testing long enough to learn new skills. <br /><br /> <br /><br />2. What are the Test Derivation Techniques? <br /><br /> • Equivalence partitioning <br /> • Boundary value analysis <br /> • State transition testing <br /> • Cause-effect graphing <br /> • Syntax testing <br /> • Statement testing <br /> • Branch / decision testing <br /> • Data flow testing <br /> • Branch condition testing <br /> • Branch condition combination testing <br /> • Modified condition decision testing <br /> • Business process <br /> • Requirements coverage <br /> • Use case derivation <br /> <br /><br />3. How many different Test Types are there? 
<br /><br /> • Archive tests <br /> • Clinical safety tests <br /> • Compatibility and conversion tests <br /> • Conformance tests <br /> • Cutover tests <br /> • Flood and volume tests <br /> • Functional tests <br /> • Installation and initialization tests <br /> • Interoperability tests <br /> • Load and stress tests <br /> • Performance tests <br /> • Portability tests <br /> • End-to-end thread testing <br /> • Recovery and restart <br /> • Documentation tests / manual procedure tests <br /> • Reliability / Robustness tests <br /> • Security tests <br /> • Temporal tests <br /> • Black box / White box tests <br /> • User interface tests / W3C WAI Accessibility testing <br /> <br /><br />4. Why use Generic Test Objectives? <br /><br /> • Demonstrate component meets requirements <br /> • Demonstrate component is ready to reuse in larger subsystems <br /> • Demonstrate integrated components are correctly assembled or combined and collaborate <br /> • Demonstrate system meets functional requirements <br /> • Demonstrate system meets non-functional requirements <br /> • Demonstrate system meets industry regulation requirements <br /> • Demonstrate supplier meets contractual obligations <br /> • Validate that system meets business or user requirements <br /> • Demonstrate system, processes, and people meet business requirements <br /> <br /><br />5. What are Quality Gates? <br /><br /> • The Quality Gate process is a formal way of specifying and recording the transition between stages in the project lifecycle <br /> • Each Quality Gate details the deliverables required and actions to be completed and metrics associated with the Quality Gate <br /> • All testing stages specify formal entry and exit criteria <br /> • The Quality Gate review process verifies the specified acceptance criteria have been achieved <br /> <br /><br />6. 
What Acceptance Criteria should be used?<br /> In the context of the system to be released, good enough is achieved when all of the following apply: <br /> • The release has sufficient benefits <br /> • The release has no critical problems <br /> • The benefits sufficiently outweigh the non-critical problems <br /> • In the present situation, with all things considered, delaying the release to potentially further improve the system would cause more harm than good <br /> <br /><br />7. Testing Metrics - Do you have examples? <br /><br /> • Number of test cases <br /> • Number of tests executed <br /> • Number of tests passed <br /> • Number of tests failed <br /> • Number of re-tests <br /> • Number of Requirements tested <br /> • Number of Defects per line of software code or per function <br /> • Number of Defects found in computer file types (e.g. jav, aspx, xml, xslt, html, com, doc) <br /> <br /> <br /><br />8. Why use Test Scripts? <br /> • Test scripts are necessary to execute repeatable tests <br /> • Can be manually executed <br /> • Can be automatically executed <br /> • Can be based on re-usable building blocks <br /> • Are a constructive component in the testing process <br /> • Provide traceability and documentation <br /> <br /> <br /><br />9. What tools are available for Test Support? <br /> • Test Asset Management Tool <br /> • Functional test tool <br /> • Non Functional test tool <br /> • Monitoring tools (for soak testing and live monitoring) <br /> • Consistent, company-wide, Defect Management Process <br /> • Repeatable Test Execution Processes <br /> • Timely Reporting <br /> • Use Cases Documentation <br /> • Test Harnesses <br /> • Common Nomenclature in use by all <br /> • How-to Guides <br /> <br /> <br /><br />10. How-to Guides - What are they? 
<br /> These are some of the possible How-to guides… <br /> • How-to read Use Cases <br /> • How-to scope each test <br /> • How-to determine which test types are necessary <br /> • How-to derive test conditions <br /> • How-to prepare a test planner <br /> • How-to write test cases <br /> • How-to plan for Security testing <br /> • How-to conduct WAI Accessibility testing <br /> • How-to test Service Level Agreements <br /> • How-to assess risks <br /> • How-to raise, track and manage defects <br /> • How-to create and maintain a regression test pack <br /> • How-to setup and manage User Acceptance Testing <br /> <br /><br />11. What are the 10 best steps for software testing?<br /> 1. Establish the Test Methodology you wish to follow ... E.g. ISEB <br /> 2. Establish the Test Principle ... E.g. Fail fast <br /> 3. Define the Requirements ... If there are no requirements then there is nothing to test <br /> 4. Document the Requirements Traceability matrix ... This should work in both directions <br /> 5. Define the specific tests which apply in your situation <br /> 6. Document the test plan <br /> 7. Document the test cases <br /> 8. Define the start of testing <br /> 9. Conduct testing <br /> 10. Define the point at which testing can stop ... When the benefit of continuing testing is outweighed by the effort of continuing testing<br />Rajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-88995846887259629442008-01-14T17:08:00.000-08:002008-01-14T17:09:11.921-08:00creation process of a test case template.<br />The purpose of this tutorial is to show the creation process of a test case template. Often we create it the wrong way by using the wrong field types, and this, in turn, increases execution and maintenance time. <br />In this tutorial I will review what works in testing and what doesn’t. 
I will then take the working pieces and fit them together into one template.<br />I presume that you have already read an article or a book about using the "Use Case" modeling technique. <br />If you haven’t, you can find articles and tutorials by searching the Web, you can read "Writing Effective Use Cases", a book by Alistair Cockburn, or you can see my recommendations at the end of this lesson.<br />Pay attention to extended Use Cases, which can be the source for the TCs.<br />An extended Use Case includes: <br />• Business life cycle Use Case<br />• Supplementary specification with non-functional requirements that has:<br />o Table with all external operational variables<br />o Relative frequency of each operation<br />o Performance requirements<br />• Useful for testing UML diagrams<br />If you would like to read a book about creating TCs, my suggestion would be to read "Introducing Software Testing: A Practical Guide to Getting Started" by Louise Tamres, 2002. In this book you will find a description, with examples, of creating test cases from use cases.<br />The information below was taken from accepted and identified sources and can be used for a better understanding of my description. This information is necessary because some terms have various meanings in software testing, and I therefore provide the definitions to avoid misunderstanding. <br />The golden rule of software testing defined by Glenford J. Myers [The Art of Software Testing, 1979]:<br />• Testing: run the program with the intent to find an error <br />Test case (TC). A set of test inputs, executions, and expected results developed for a particular objective.<br />Test Procedure. A document providing detailed instructions for the [manual] execution of one or more test cases. [BS7925-1] Often called a manual test script.<br />Test suite. A collection of test scripts or test cases that is used for validating bug fixes (or finding new bugs) within a logical or physical area of a product. [H. Q. 
Nguyen, 2001]<br />The test case description can be either documented manually or stored in the test repository of an automated testing tool suite. If the test cases are documented automatically, the format and content will be limited to what the input forms and test repository can accept. [D.J. Mosley 2000]<br />In our case we assume that test cases must be documented manually. <br />Use Case (UC). A description of a set of sequences of actions, including variants, that a system performs that yields an observable result of value to an actor. [UML guide by G. Booch, 2001] <br /><br />In order to select the fields that we will use in our template, let us first identify all possible field choices for the TC: <br />1. Project name and test suite ID and name<br />2. Use Case Name (name is usually an action like: Create the)<br />3. Version date and version number of the test cases<br />4. Version Author and contact information<br />5. Approval and distribution list<br />6. Revision history with reasons for update<br />7. Other sources and prerequisite information<br />8. Environment prerequisites (installation and network)<br />9. Test pre-conditions (data created before testing)<br />***************************************<br />10. TC name<br />11. TC number (ID)<br />12. Use Case scenario (main success scenario, flow, path, and branching action)<br />13. Type of Software Testing (e.g. functional, load, etc.)<br />14. Objectives<br />15. Initial conditions or preconditions<br />16. Valid or invalid conditions <br />17. Input data (ID, type, and values)<br />18. Test steps<br />19. Expected result <br />20. Clean up or post conditions<br />21. Comments<br />****************************************<br />22. Actual results (passed/fail)<br />23. Date<br />24. Tester<br />25. Type of machine<br />26. Type of OS etc.<br />27. Build release<br />28. Label name<br />29. Date of release<br />1. Select all fields that will be used in the Test Log document. 
From my experience, MS Excel is the best for a Test Log. The following fields are usually used in a test log document, but these fields sometimes mistakenly appear in the test case template.<br />o Actual results (passed/fail)<br />o Date<br />o Tester<br />o Type of machine<br />o Type of OS<br />o Build release<br />o Label name<br />o Date of release <br />2. Now select all fields that belong to the test suite and do not depend on small details. We will assume that for each use case, we will create a number of test cases in a separate test suite document. This information can be provided at the beginning of the test suite document:<br />o Project name and test suite ID and name<br />o Use Case Name (name is usually an action like: Create a…)<br />o Version date and version number of the test cases<br />o Version Author and contact information<br />o Approval and distribution list<br />o Revision history with reasons for update<br />o Environment prerequisites (installation and network)<br />o Test pre-conditions (data created before testing)<br />o Other sources and prerequisite information<br />o Clean up or post conditions<br />3. Choose all the necessary fields for the TC template from the remaining list:<br />1. TC name<br />2. TC number (ID)<br />3. Use Case scenario (main success scenario, flow etc.)<br />4. Type of Testing<br />5. Objectives<br />6. Initial conditions or preconditions<br />7. Valid or invalid conditions (use the word Verify for valid conditions and Attempt to for TC with invalid data. This will help simplify verification and maintenance)<br />8. Input data <br />9. Test steps<br />10. Expected result <br />11. Comments<br />Let us choose only the necessary fields and combine some information like TC number, type of test, and project name in one field of the template.<br />Remember: Adding additional fields to the template increases the amount of work to create and maintain the test suite. The project cost rises as well. 
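If it helps to see the trimmed-down template as data, here is a sketch (the field names are my own, mirroring the list above; any table or spreadsheet format works just as well):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One row of the minimal test case template."""
    tc_id: str                 # combines project name, type of testing, number
    uc_flow: str               # Use Case scenario (main success scenario, flow, etc.)
    objective: str             # "Verify that ..." (valid) or "Attempt to ..." (invalid)
    preconditions: list = field(default_factory=list)
    input_data: list = field(default_factory=list)
    expected_result: str = ""

# A hypothetical filled-in row:
tc = TestCase(
    tc_id="Proj.Fun.-010",
    uc_flow="2.2.2 main success scenario",
    objective="Verify that the user can withdraw cash",
    preconditions=["The user has been authenticated"],
    input_data=["The user enters an amount of 100"],
    expected_result="The system dispenses the cash",
)
print(tc.tc_id)
```

Keeping the record this small is the point: every extra field is something to fill in and maintain for every test case in the suite.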
Keep in mind that the same rules apply to the test suite and a test log document.<br />1. Test suite name; TC name; TC number (ID); type of testing;<br />2. Use Case scenario (main success scenario, flow etc.)<br />3. Objectives<br />4. Initial conditions or preconditions<br />5. Valid or invalid conditions (when it is possible, begin your description with the word Verify for valid conditions and input data and Attempt to for invalid. This will help you to simplify verification and maintenance of TCs.)<br />6. Input data (ID, type, and values) <br />7. Test steps<br />8. Expected result <br />If you plan to use automated testing tools in the future, please review the following steps:<br />o Perform setup<br />o Perform the test<br />o Verify the results<br />o Log the results<br />o Handle unpredictable situations<br />o Decide to stop or continue the test case<br />o Perform cleanup<br />[D.J. Mosley 2000/2002]<br />I can’t resist reminding you of Cem Kaner’s good practices for designing TCs before showing sample templates. 
(A more detailed description of creating a good TC could be the topic of a separate book.)<br />An excellent test case satisfies the following criteria:<br />• Reasonable probability of catching an error<br />• Exercises an area of interest<br />• Does interesting things<br />• Doesn’t do unnecessary things<br />• Neither too simple nor too complex<br />• Not redundant with other tests<br />• Makes failures obvious<br />• Allows isolation and identification of errors<br />[Cem Kaner, "Black Box Software Testing - Professional seminar", 2002, section 8 "Test case design"]<br />Scripting: An Industry Worst Practice<br />COMPLETE SCRIPTING is favored by people who believe that repeatability is everything and who believe that with repeatable scripts, we can delegate to cheap labor.<br />[Cem Kaner, "Black Box Software Testing - Professional seminar", 2002, section 23 "scripting"]<br />The following are samples of templates with the fields that we previously chose. <br />For each unique test case number, I chose the following format: <br />XXX.XXX.-XXX<br />The description is: the first XXX is the name of the project (abbreviation), the second XXX is the type of testing, and the final XXX is a unique number.<br />If you are not using the Use Case modeling technique, you can rename the Use Case flow field to "Requirement under Test".<br />Blank template:<br />TC # | UC flow<br />Objectives<br />Preconditions | Input (maybe for different conditions) | Expected Results<br />Guidance for creating text in a template. 
<br />TC#: Proj.Fun.-010 | UC flow: 2.2.2 main success scenario (basic, alternative, or exception flow name, or function under test)<br />Objectives - try to use:<br />-Verify that… (for a TC with valid data) <br />-Attempt to… (for a TC with invalid data)<br />Preconditions:<br />-The system displays…<br />-User has successfully…<br />-The system allows… <br />-The user has been authenticated… (for different conditions where applicable)<br />Input:<br />-The user selects…<br />-The user enters…<br />Expected Results:<br />-The expected result may be copy-pasted from the Use Case, but it depends on how the Use Case is written.<br />I'm giving the best advice I have. You have to decide what is suitable for your needs and modify the template accordingly.<br />Rajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-30460975072148250012008-01-14T17:06:00.000-08:002008-01-14T17:07:32.560-08:00MANUAL TESTING - New<span style="font-weight:bold;"> MANUAL TESTING</span><br /><br />1. What is the testing process? <br />Verifying that given input data produces the expected output. <br />2. What is the difference between testing and debugging? <br />The big difference is that debugging is conducted by a programmer, and the programmer fixes the errors during the debugging phase. A tester never fixes the errors, but rather finds them and returns them to the programmer. <br />3. What is the difference between structural and functional testing? <br />Structural testing is "white box" testing, based on the algorithm or code. Functional testing is "black box" (behavioral) testing, where the tester verifies the functional specification.<br />4. What is a bug? What types of bugs do you know? <br />A bug is an error during execution of the program. There are two types of bugs: syntax and logical. <br />5. What is the difference between testing and quality assurance (QA)? <br />This question is surprisingly popular. However, the answer is quite simple. 
The goals of the two are different: the goal of testing is to find errors; the goal of QA is to prevent errors in the program. <br />6. What kinds of testing do you know? What is system testing? What is integration testing? What is unit testing? What is regression testing? <br />Your theoretical background and homework may shine in this question. System testing is testing of the entire system as a whole. This is what the user sees and feels about the product you provide. Integration testing is the testing of the integration of different modules of the system. Usually, the integration process is quite painful, and this testing is the most serious one of all. Integration testing comes before system testing. Unit testing is testing of a single unit (module) within the system. It's conducted before integration testing. Regression testing is "backward check" testing. The idea is to ensure that new functionality added to the system does not break the old, already-checked functionality of the system. <br />7. What major processes are involved in testing?<br />The major processes include: <br />1. Planning (test strategy, test objectives, risk management) <br />2. Design (functions to be tested, test scenarios, test cases) <br />3. Development (test procedures, test scripts, test environment) <br />4. Execution (execute tests) <br />5. Evaluation (evaluate test results, compare actual results with expected results) <br />8. Could you test a program 100%? 90%? Why? <br />Definitely not! The major problem with testing is that you cannot calculate how many errors are in the code, the functionality, etc. There are many factors involved, such as the experience of the programmer, the complexity of the system, etc. <br />9. How would you test a mug (chair/table/gas station etc.)? <br />First of all, you must ask for the requirements, functional specification, and design document of the mug. There you will find requirements like the ability to hold hot water, waterproofing, stability, breakability, and so on. 
Then you should test the mug according to all documents. <br />10. How would you conduct your test? <br />Each test is based on the technical requirements of the software product.<br />11. What is the other name for white box testing?<br />Clear box testing<br />12. What is the other name for the waterfall model?<br />Linear sequential model<br />13. What is considered a good test?<br />It should cover most of the object's functionality<br />14. What is 'Software Quality Assurance'? <br />Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.<br />15. What is 'Software Testing'? <br />Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. <br />16. What are some recent major computer system failures caused by software bugs? <br />• In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems. <br />• A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems. 
<br />• According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash. <br />• In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings. <br />• News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn't work. <br />• In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district's CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors. <br />• In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.
<br />• Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages. <br />• In April of 1999 a software bug caused the failure of a $1.2 billion military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested. <br />• A small town in Illinois received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues. <br />• In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready. <br />• The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999 according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations. <br />• In April of 1998 a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug. <br />• January 1998 news reports told of software problems at a major U.S. 
telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills. <br />• In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies. <br />• A retail store chain filed suit in August of 1997 against a transaction processing system vendor (not a credit card company) due to the software's inability to handle credit cards with year 2000 expiration dates. <br />• In August of 1997 one of the leading consumer credit reporting companies reportedly shut down their new public web site after less than two days of operation due to software problems. The new site allowed web site visitors instant access, for a small fee, to their personal credit reports. However, a number of initial users ended up viewing each other's reports instead of their own, resulting in irate customers and nationwide publicity. The problem was attributed to "...unexpectedly high demand from consumers and faulty software that routed the files to the wrong computers." <br />• In November of 1996, newspapers reported that software bugs caused the 411 telephone information system of one of the U.S. RBOC's to fail for most of a day. Most of the 2000 operators had to search through phone books instead of using their 13,000,000-listing database. The bugs were introduced by new software modifications and the problem software had been installed on both the production and backup systems. A spokesman for the software vendor reportedly stated that 'It had nothing to do with the integrity of the software. It was human error.'
<br />• On June 4, 1996 the first flight of the European Space Agency's new Ariane 5 rocket failed shortly after launching, resulting in an estimated uninsured loss of a half billion dollars. It was reportedly due to the lack of exception handling of a floating-point error in a conversion from a 64-bit floating-point number to a 16-bit signed integer. <br />• Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered. <br />• Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a '...funny feeling in my gut', decided the apparent missile attack was a false alarm. The filtering software code was rewritten. <br />17.Why is it often hard for management to get serious about quality assurance? <br />Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable:<br />In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied, <br />"I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords."<br />"My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors."
<br />"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home." <br />18.Why does software have bugs? <br />•miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements). <br />•software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well-engineered. <br />•programming errors - programmers, like anyone else, can make mistakes. <br />• changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.<br />• time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
<br />• egos - people prefer to say things like: <br />'no problem' 'piece of cake'<br />'I can whip that out in a few hours'<br />'it should be easy to update that old code'<br />instead of: 'that adds a lot of complexity and we could end up making a lot of mistakes'<br />'we have no idea if we can do that; we'll wing it'<br />'I can't estimate how long it will take, until I take a close look at it'<br />'we can't figure out what that old spaghetti code did in the first place'<br />If there are too many unrealistic 'no problem's', the result is bugs.<br />• poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read'). <br />•software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs. <br />19.How can new Software QA processes be introduced in an existing organization? <br />•A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary. <br />•Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand. <br />•For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects.
A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers. <br />•In all cases the most value for effort will be in requirements management processes, with a goal of clear, complete, testable requirement specifications or expectations. <br />20.What is verification? validation? <br />Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.<br />21.What is a 'walkthrough'? <br />A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required. <br />22.What's an 'inspection'? <br />An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable above. Their skill may have low visibility, but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection. <br />23.What kinds of testing should be considered?
<br />•Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality. <br />•White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions. <br />•unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses. <br />•incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers. <br />•integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems. <br />•functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.) <br />•system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system. 
<br />•end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. <br />•sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state. <br />•regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing. <br />•acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time. <br />•load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails. <br />•stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc. <br />•performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans. <br />•usability testing - testing for 'user-friendliness'.
Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers. <br />•install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes. <br />•recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems. <br />•security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques. <br />•compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment. <br />•exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it. <br />•ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it. <br />•user acceptance testing - determining if software is satisfactory to an end-user or customer. <br />•comparison testing - comparing software weaknesses and strengths to competing products. <br />•alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers. <br />•beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
<br />•mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources. <br />24.What are 5 common problems in the software development process? <br />•poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems. <br />•unrealistic schedule - if too much work is crammed in too little time, problems are inevitable. <br />•inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash. <br />•featuritis - requests to pile on new features after development is underway; extremely common. <br />•miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed. <br />25.What are 5 common solutions to software development problems? <br />•solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements. <br />•realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out. <br />•adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing. <br />•stick to initial requirements as much as possible - be prepared to defend against changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect.
This will provide them a higher comfort level with their requirements decisions and minimize changes later on. <br />•communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so that customers' expectations are clarified. <br />26.What is software 'quality'? <br />Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free. <br />27.What is 'good code'? <br />'Good code' is code that works, is bug free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards.
<br />For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation: <br />•minimize or eliminate use of global variables. <br />•use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions. <br />•use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions. <br />•function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable. <br />•function descriptions should be clearly spelled out in comments preceding a function's code. <br />•organize code for readability. <br />•use whitespace generously - vertically and horizontally. <br />•each line of code should contain 70 characters max. <br />•one code statement per line. <br />•coding style should be consistent throughout a program (e.g., use of brackets, indentations, naming conventions, etc.) <br />•in adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code. <br />•no matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or if possible a separate flow chart and detailed program documentation. <br />•make extensive use of error handling procedures and status and error logging. <br />•for C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application).
Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading.) <br />• for C++, keep class methods small; less than 50 lines of code per method is preferable. <br />• for C++, make liberal use of exception handlers. <br />28.What is 'good design'? <br />'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help; some common rules-of-thumb include: <br />• the program should act in a way that least surprises the user <br />• it should always be evident to the user what can be done next and how to exit <br />• the program shouldn't let the users do something stupid without warning them. <br />29.What is SEI? CMM? ISO? IEEE? ANSI? Will it help? <br />•SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes. <br />•CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of organizational 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful.
Organizations can receive CMM ratings by undergoing assessments by qualified auditors. <br />Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.<br />Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.<br />Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.<br />Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.<br />Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.<br />Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.<br />• ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software.
It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a)Q9001-2000 - Quality Management Systems: Requirements; (b)Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c)Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed.<br />• IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others. <br />• ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality). <br />• Other software development process assessment methods besides CMM and ISO 9000 include SPICE, Trillium, TickIT, and Bootstrap. <br />30. What is the 'software life cycle'? <br />The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.<br />31.Will automated testing tools make testing easier? <br />•Possibly. For small projects, the time needed to learn and implement them may not be worth it.
For larger projects, or ongoing long-term projects, they can be valuable. <br />•A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc. the application can then be retested by just 'playing back' the 'recorded' actions, and comparing the logging results to check effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
<br />•Other automated tools can include: <br />code analyzers - monitor code complexity, adherence to standards, etc.<br />coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.<br />memory analyzers - such as bounds-checkers and leak detectors.<br />load/performance test tools - for testing client/server and web applications under various load levels.<br />web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.<br />other tools - for test case management, documentation management, bug reporting, and configuration management.<br /><br />32.What makes a good test engineer?<br />A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.<br />33.What makes a good Software QA engineer? <br />The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization.
Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews. <br />34.What makes a good QA or Test manager? <br />A good QA, test, or QA/Test (combined) manager should: <br />• be familiar with the software development process <br />• be able to maintain the enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems) <br />• be able to promote teamwork to increase productivity <br />• be able to promote cooperation between software, test, and QA engineers <br />• have the diplomatic skills needed to promote improvements in QA processes <br />• have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to <br />• have people judgement skills for hiring and keeping skilled personnel <br />• be able to communicate with technical and non-technical people, engineers, managers, and customers <br />• be able to run meetings and keep them focused <br />35.What's the role of documentation in QA? <br />Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and determining which documents contain a particular piece of information. Change management for documentation should be used if possible. <br />36.What's the big deal about 'requirements'? 
<br />One of the most reliable methods of ensuring problems, or failure, in a complex software project is to have poorly documented requirements specifications. Requirements are the details describing an application's externally-perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would be something like 'the user must enter their previously-assigned password to access the application'. Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods are available depending on the particular project. Many books are available that describe various approaches to this task.<br />Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house or external personnel, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be included if possible. <br />Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as 'The product shall...'. 'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements. <br />In some organizations requirements may end up in high-level project plans, functional specification documents, in design documents, or in other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. 
Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly. <br />37.What steps are needed to develop and run software tests? <br />The following are some of the steps to consider: <br />•Obtain requirements, functional design, and internal design specifications and other necessary documents <br />•Obtain budget and schedule requirements <br />•Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.) <br />•Identify the application's higher-risk aspects, set priorities, and determine scope and limitations of tests <br />•Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc. <br />•Determine test environment requirements (hardware, software, communications, etc.) <br />•Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.) <br />•Determine test input data requirements <br />•Identify tasks, those responsible for tasks, and labor requirements <br />•Set schedule estimates, timelines, milestones <br />•Determine input equivalence classes, boundary value analyses, error classes <br />•Prepare test plan document and have needed reviews/approvals <br />•Write test cases <br />•Have needed reviews/inspections/approvals of test cases <br />•Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data <br />•Obtain and install software releases <br />•Perform tests <br />•Evaluate and report results <br />•Track problems/bugs and fixes <br />•Retest as needed <br />•Maintain and update test plans, test cases, test environment, and testware through the life cycle <br />38.What's a 'test plan'? 
<br />A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project: <br />•Title <br />•Identification of software including version/release numbers <br />•Revision history of document including authors, dates, approvals <br />•Table of Contents <br />•Purpose of document, intended audience <br />•Objective of testing effort <br />•Software product overview <br />•Relevant related document list, such as requirements, design documents, other test plans, etc. <br />•Relevant standards or legal requirements <br />•Traceability requirements <br />•Relevant naming conventions and identifier conventions <br />•Overall software project organization and personnel/contact-info/responsibilities <br />•Test organization and personnel/contact-info/responsibilities <br />•Assumptions and dependencies <br />•Project risk analysis <br />•Testing priorities and focus <br />•Scope and limitations of testing <br />•Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable <br />•Outline of data input equivalence classes, boundary value analysis, error classes <br />•Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems <br />•Test environment validity analysis - differences between the test and production systems and their impact on test validity. 
<br />•Test environment setup and configuration issues <br />•Software migration processes <br />•Software CM processes <br />•Test data setup requirements <br />•Database setup requirements <br />•Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs <br />•Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs <br />•Test automation - justification and overview <br />•Test tools to be used, including versions, patches, etc. <br />•Test script/test code maintenance processes and version control <br />•Problem tracking and resolution - tools and processes <br />•Project test metrics to be used <br />•Reporting requirements and testing deliverables <br />•Software entrance and exit criteria <br />•Initial sanity testing period and criteria <br />•Test suspension and restart criteria <br />•Personnel allocation <br />•Personnel pre-training needs <br />•Test site/location <br />•Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues <br />•Relevant proprietary, classified, security, and licensing issues <br />•Open issues <br />39.What's a 'test case'? <br />•A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. <br />•Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible. 
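A test case like the one described can also be expressed directly as an automated check. The sketch below uses Python's unittest module; the authenticate function and the user data are hypothetical stand-ins for the application under test:

```python
import unittest

# Hypothetical system under test: the testable requirement from earlier
# ("the user must enter their previously-assigned password") made concrete.
KNOWN_USERS = {"alice": "s3cret"}

def authenticate(user, password):
    """Return True only if the user supplies their assigned password."""
    return KNOWN_USERS.get(user) == password

class TestLogin(unittest.TestCase):
    """Test case TC-LOGIN-001: password is required for access."""

    def test_correct_password_grants_access(self):
        # Input/action: valid user supplies the assigned password.
        # Expected result: access granted.
        self.assertTrue(authenticate("alice", "s3cret"))

    def test_wrong_password_denies_access(self):
        # Expected result: access denied for a wrong password.
        self.assertFalse(authenticate("alice", "guess"))

    def test_unknown_user_denies_access(self):
        # Expected result: access denied for an unknown user.
        self.assertFalse(authenticate("bob", "s3cret"))

# Run the checks directly, outside a test runner, for illustration.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLogin)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note how each test method still carries the elements of a written test case: an identifier and objective (the docstring), setup (the known user data), steps (the call), and an expected result (the assertion).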
<br />40.What should be done after a bug is found? <br />The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available. <br />The following are items to consider in the tracking process: <br />•Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary. <br />•Bug identifier (number, ID, etc.) <br />•Current bug status (e.g., 'Released for Retest', 'New', etc.) <br />•The application name or identifier and version <br />•The function, module, feature, object, screen, etc. where the bug occurred <br />•Environment specifics, system, platform, relevant hardware specifics <br />•Test case name/number/identifier <br />•One-line bug description <br />•Full bug description <br />•Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool <br />•Names and/or descriptions of file/data/messages/etc. used in test <br />•File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem <br />•Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common) <br />•Was the bug reproducible? 
<br />•Tester name <br />•Test date <br />•Bug reporting date <br />•Name of developer/group/organization the problem is assigned to <br />•Description of problem cause <br />•Description of fix <br />•Code section/file/module/class/method that was fixed <br />•Date of fix <br />•Application version that contains the fix <br />•Tester responsible for retest <br />•Retest date <br />•Retest results <br />•Regression testing requirements <br />•Tester responsible for regression tests <br />•Regression testing results <br />A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.<span style="font-weight:bold;">DEFECT TRACKING SYSTEM</span><br />TABLE OF CONTENTS<br /><br /> <br />1.0 General<br /> Module Objectives<br /> Module Structure<br />2.0 Bug Tracking System: An Overview<br /> What is the need for a Bug Tracking System?<br /> Features and parameters of a bug tracking system<br />3.0 Bug Life Cycle<br /> BTS usage and relevance<br /> How do you choose a defect tracking system?<br />4.0 About Bugzilla<br /> Life cycle of a bug in Bugzilla<br />5.0 About Test Tracker<br />6.0 Rational ClearQuest<br /> Highlights<br />7.0 Conclusion<br />8.0 Unit Summary<br /> Exercise <br /><br />1.0 General<br />Defect/bug tracking is one of the major activities in the testing process. Bug tracking involves recording/documenting the bugs within an organized system. 
Depending on the requirements and feasibility of the project, bugs can be maintained and tracked within any understandable system.<br />Besides using spreadsheet solutions like Microsoft Excel, bugs can be tracked in organized systems like the available bug tracking tools.<br />The following module gives you an introduction to some of the bug tracking tools and the processes to be followed while recording and tracking a bug.<br />Recording a bug in this context refers to documenting the bug.<br /><br />1.1 Module Objectives<br />At the end of this module you should be able to: <br /> Define a bug tracking system.<br /> Identify the importance of tracking bugs.<br /> Understand the bug life cycle. <br /> Get an overview of Bugzilla, TestTrack and ClearQuest.<br />1.2 Module Structure:<br /><br /> S.no Topic Duration <br />1 Bug Tracking System overview 3<br />2 Bug life cycle 3<br />3 Introduction to Bugzilla 2<br />4 Introduction to Test Tracker 2<br />5 Introduction to ClearQuest 2<br /> Total Duration 12<br />2.0 Bug Tracking System: An Overview<br />A defect tracking system (also known as a bug tracking tool, issue tracking tool or problem tracker) is a set of scripts that maintains a database of problem reports. Defect tracking systems allow an individual or a group of developers to keep track of outstanding bugs in their product effectively.<br />For many years, defect-tracking software has remained principally the domain of large software development houses. Even then, most shops never bothered with bug-tracking software, and instead simply relied on shared lists and email to monitor the status of defects. This procedure is error-prone and tends to cause those bugs judged least significant by developers to be dropped or ignored. <br />These days, many companies are finding that integrated defect-tracking systems reduce downtime, increase productivity, and raise customer satisfaction with their systems. 
Along with full disclosure, an open bug-tracker allows manufacturers to keep in touch with their clients and resellers, to communicate about problems effectively throughout the data management chain. Many corporations have also discovered that defect-tracking helps reduce costs by providing IT support accountability, telephone support knowledge bases, and a common, well-understood system for accounting for unusual system or software issues.<br />2.1 What is the need for a Bug Tracking System?<br /><br />Helping you provide better customer service:<br />Because you have an instantly available database of which bugs were found in which product, you have the information to help serve your customers better when you need it.<br />Defect management:<br />Frequently, companies find that projects get out of hand - bugs are being raised from different sources to different team members and nobody is really sure how many bugs there are, which ones have been fixed and whether the nightmare will ever end.<br />Reduce clerical overhead:<br />Many companies find that a lot of their time is spent on "clerical overhead" - producing spreadsheets of bug lists, manually creating graphs and analyzing statistics. A BTS will eliminate this.<br />Management-level reporting:<br />Maybe you, or your boss, want to see simple summary reports showing how many show-stopper bugs are still unfixed in your software, or how long it is taking on average to fix high-priority bugs. A good bug tracking system will do that for you.<br />Analysis and categorization of bugs found:<br />A BTS gives you the ability to assign categories to bugs, and then analyze according to those categories. 
If you want to know what percentage of your bugs are caused by printer problems, and how many by user error, then you can do that with a good BTS.<br />Less work, better products and happy customers:<br />Ultimately, the benefits of a BTS will be: less work for you and your team because of the reduced clerical overhead, better products because of the increase in management control and, therefore, happy customers.<br />Ease of use:<br />A good BTS is easy to use, so time will not be wasted on user training, and the application can therefore be implemented immediately.<br />Quick search:<br />Advanced search capabilities of the BTS allow end-users to track, monitor and view archived or pending issues assigned to a particular individual or project, with custom-tailored filters that can quickly detail and highlight project status types and differentiate between test, production and quality control operating environments.<br /><br />2.2 Features and parameters of a bug tracking system<br />What are the minimal parameters of a bug tracking system? First, it should be fairly easy to submit a bug report without having to download a client-side applet or piece of software. 
This suggests a Web-based application, or one where bugs can be submitted by e-mail.<br />You should be able to track bugs and issues by the following parameters:<br /><br /> Ability to be organized by specific use case and project <br /> Whether it's Web- or e-mail-based, so there's no need to install a client-side application <br /> A designated owner (a key person) to supervise reporting on each issue <br /> Ability to assign various levels of priority to an issue <br /> Ability to assign various statuses to an issue or bug (e.g., Active, In Process, Fixed) <br /> Analysis query interface that allows effective queries of the database <br /> Bug and issue dependency tree <br /> E-mail notification about assigned and fixed issues <br /> Authorization/authentication <br /> Automatic user profile -- personalization <br /> Enabling a client to directly see the status and progress of their project <br /> Also essential to your project may be the ability to associate the bug occurrence with operating system, browser, type of device, protocol, and network connection.<br /> Then there is the issue of filing bugs. This depends on your needs, but I find it helpful to file them by product, in one project especially, and to file by the use-case activity needed to complete a task, with a graphical representation. ("Use case" is a Unified Modeling Language term describing an interaction between an actor -- typically the user -- and the system.)<br /><br />There are a few quick, out-of-the-box solutions available, however. 
They vary by platform and requirements for supporting software, such as scripting languages and back-end databases.<br /><br />3.0 Bug Life Cycle<br />The typical life cycle of a bug is as follows: <br /><br /> Bug is identified in the system and created in Bugs Online <br /> Bug is assigned to a developer <br /> Developer resolves the bug or clarifies with the user <br /> Developer sets the bug to "Resolved/Fixed" status and assigns it back to the original user <br /> Original user verifies that the bug has been resolved and, if so, sets the bug to "Closed" status. Only the original user who created the bug has access to "Close" the bug. Once the bug is closed it may be reopened if it is reproducible in future builds.<br /> If the bug was not resolved to the user's satisfaction they may assign it back to the developer with a description (by adding a new detail). If this occurs then the bug returns to step 2 above. <br /><br />It is important to note that throughout the life cycle of a bug, it should be assigned to someone. By ensuring that a bug is always assigned to a user or a developer, system administrators will maintain a high level of accountability for all bugs submitted to the system.<br /><br />3.1 BTS usage and relevance<br />An important aspect of testing is clear and effective communication between testers, developers, and managers - a key factor being comprehensive and well-organized problem reports. Problem reports should clearly identify the program and the problem, the problem type, its severity, and steps to reproduce it. In addition, the development team should further expand such reports by grouping them into functional categories, assigning problems to specific developers, and tracking their evolution and final resolution. Let's see how a BTS is useful in different aspects of the software industry.<br />All programs have bugs <br />Like it or not, it's true, and unless the bugs are tracked they can't be fixed, leading to failed projects. 
<br />Increase visibility of the development process <br />Improve your customer satisfaction by increasing communication and allowing customers to monitor the progress of development. <br />Traceability of bugs and their resolutions <br />Maintain audit trails to ensure all changes are accounted for. <br />Release planning <br />Manage the bugs and enhancements that are to be resolved for the next product release. <br />Resource scheduling <br />Manage the bugs that are assigned to each team member. <br />Prioritization <br />Assign priorities to the bugs to ensure critical errors are addressed before minor issues, such as the wording of an error message. <br />Improved control of a project <br />Monitor the status and progress of bugs, to follow the improvement in stability of a product or to ensure early detection of failing projects. <br />Information consolidation <br />Capture all bugs, feature requests, FAQs, ideas, etc. in one place to promote the sharing of information project-wide. <br />Improve the quality of your software by increasing productivity <br />Notification of bug creation and status change to team members increases awareness and responsiveness. <br />Quality matters! <br />Quality products reflect well on the company, leading to increased sales and/or added value.<br /><br />3.2 How do you choose a defect tracking system?<br />There are any number of defect tracking tools available in the market. These tools are broadly divided into two major categories:<br />Web-based defect tracking systems<br />Client/server-based defect tracking systems.<br />Web-based defect tracking systems are widely used across the software industry, while client/server-based systems are used mainly by product companies whose software development is carried out in a single place or building.<br />The following parameters are desirable in any defect tracking system. 
They are:<br /><br /> Track issues, software defects, enhancements and change requests.<br /> Ready to run "as is" or customizable to meet your needs.<br /> Access the system with a browser anywhere, anytime, if it is a web-based BTS.<br /> Very responsive; does not feel like a typical slow web application.<br /> View issues across multiple projects; should allow you to create as many projects as you need.<br /> Quick stats show instant project statistics from both personal and global perspectives.<br /> Powerful, automated, per-project email notifications - keeps everyone informed.<br /> Watch list - should allow you to group important issues in your own private list.<br /> Add your own custom fields to the many already provided.<br /> Upload attachments to issues without size restriction.<br /> Export your data to CSV files (Excel compatible) at any time.<br /> Custom report builder - design reports and share them with others or keep them private.<br /> Comments and history record an unlimited dialogue & audit trail of an issue's life.<br /> Easy to analyze projects using statistics and reports; both standard and custom reports.<br /> Add both public and private notes or comments to any issue.<br /> Increased team effectiveness and return on investment.<br /> Reduced clerical overhead and issues slipping through the cracks.<br /> Management-level reporting and statistics overview.<br /> Low, predictable cost of ownership.<br /> Less of a gamble; no long-term contracts; stop using the system at any time.<br /> Fair and easy pricing model.<br /> Role-based security that provides easier control of user access.<br /> Powerful searching capabilities - basic, advanced and comments.<br /> Stable and reliable; near-zero maintenance.<br /> No support costs.<br /> Cost-effective with no up-front investment.<br /> Easy to share information with your partners and customers anywhere, anytime.<br /> Intuitive, even for non-techies; online, context-sensitive help.<br /> 
Real-time access to the most up-to-date information.<br /> System that continually becomes more usable and versatile.<br /> Allows programmers and testers to focus on their jobs, not the tool.<br /> No training required.<br /> Enables effective use of a distributed workforce.<br /> Control which users have access to certain projects, issues, and reports.<br /> Enhance team productivity and save money.<br /><br />4.0 About Bugzilla<br />Bugzilla is open source software. Its source code has been released under the Mozilla Public License.<br />Bugzilla is one example of a class of programs called "defect tracking systems", which allow an individual or a group of developers to keep track of outstanding bugs in their product effectively. Bugzilla was originally written by Terry Weissman in a programming language called "TCL", to replace a crappy bug-tracking database used internally by Netscape Communications. Terry later ported Bugzilla to Perl from TCL, and in Perl it remains to this day. Most commercial defect-tracking software vendors at the time charged enormous licensing fees, and Bugzilla quickly became a favorite of the open-source crowd (with its genesis in the open-source browser project, Mozilla). It is now the de facto standard defect-tracking system against which all others are measured.<br />Bugzilla has matured immensely, and now boasts many advanced features. 
These include:<br />o Integrated, product-based granular security schema <br />o Inter-bug dependencies and dependency graphing <br />o Advanced reporting capabilities <br />o A robust, stable RDBMS back-end <br />o Extensive configurability <br />o A very well understood and well-thought-out natural bug resolution protocol <br />o Email, XML, console, and HTTP APIs <br />o Available integration with automated software configuration management systems, including Perforce and CVS (through the Bugzilla email interface and check-in/checkout scripts) <br />o Too many more features to list <br />Despite its current robustness and popularity, Bugzilla faces some near-term challenges, such as reliance on a single database, a lack of abstraction of the user interface and program logic, verbose email bug notifications, a powerful but daunting query interface, little reporting configurability, problems with extremely large queries, some unsupportable bug resolution options, little internationalization (although non-US character sets are accepted for comments), and dependence on some nonstandard libraries.<br /><br />4.1 Life cycle of a bug in Bugzilla<br />What happens to a bug when it is first reported depends on who reported it. New Bugzilla accounts by default create bugs that are UNCONFIRMED - this means that a QA (Quality Assurance) person needs to look at the bug and confirm it exists before it gets turned into a NEW bug.<br /><br />When a bug becomes NEW, the developer will probably look at the bug and either accept it or give it to someone else. Whenever a bug is reassigned or has its component changed, its status is set back to NEW. The NEW status means that the bug is newly added to a particular developer's plate, not that the bug is newly reported. 
<br /><br />Those to whom additional permissions have been given have the ability to change all the fields of a bug (by default, you can only change a few). Whenever you change a field in a bug it's a good idea to add additional comments to explain what you are doing and why. Make a note whenever you do things like change the component, reassign the bug, create an attachment, add a dependency or add someone to the CC list. Whenever someone makes a change to the bug or adds a comment, the owner, reporter, the CC list and those who voted for the bug are sent an email (unless they have switched it off) showing the changes to the bug report. <br />When a bug is fixed it's marked RESOLVED and given one of the following resolutions. <br />FIXED <br />A fix for this bug has been checked into the tree and tested by the person marking it FIXED. <br />INVALID <br />The problem described is not a bug, or not a bug in Mozilla. <br />WONTFIX <br />The problem described is a bug that will never be fixed, or a problem report that describes a "feature", not a bug. <br />LATER and REMIND <br />These are both deprecated. Please do not use them. <br />DUPLICATE <br />The problem is a duplicate of an existing bug. Marking a bug duplicate requires the bug number of the duplicating bug and will add a comment with the bug number into the description field of the bug it is a duplicate of. <br />WORKSFORME <br />All attempts at reproducing this bug in the current build were futile. If more information appears later, the bug can be re-opened. <br />MOVED <br />The bug was specific to a particular Mozilla-based distribution and didn't affect mozilla.org code. The bug was moved to the bug database of the distributor of the affected Mozilla derivative. <br /><br />QA looks at resolved bugs and ensures the appropriate action has been taken. If they agree, the bug is marked VERIFIED. Bugs remain in this state until the product ships, at which time the bug is marked CLOSED. 
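The status and resolution flow described in this section can be summarized as a small state machine. The sketch below is a simplified reading of the life cycle as described here, not Bugzilla's actual implementation; the event names are invented for the example:

```python
# Simplified sketch of the Bugzilla bug life cycle described above.
# Transition table: current status -> {event: next status}.

RESOLUTIONS = {"FIXED", "INVALID", "WONTFIX", "DUPLICATE", "WORKSFORME", "MOVED"}

TRANSITIONS = {
    "UNCONFIRMED": {"confirm": "NEW"},
    "NEW":         {"resolve": "RESOLVED", "reassign": "NEW"},
    "RESOLVED":    {"verify": "VERIFIED", "reopen": "REOPENED"},
    "VERIFIED":    {"ship": "CLOSED", "reopen": "REOPENED"},
    "CLOSED":      {"reopen": "REOPENED"},
    "REOPENED":    {"resolve": "RESOLVED"},
}

class Bug:
    def __init__(self):
        self.status = "UNCONFIRMED"
        self.resolution = None

    def apply(self, event, resolution=None):
        """Apply an event, rejecting transitions the life cycle forbids."""
        allowed = TRANSITIONS[self.status]
        if event not in allowed:
            raise ValueError(f"cannot '{event}' a {self.status} bug")
        if event == "resolve":
            if resolution not in RESOLUTIONS:
                raise ValueError(f"unknown resolution: {resolution}")
            self.resolution = resolution
        self.status = allowed[event]

# Walk one bug through the happy path: confirm, fix, verify, ship.
bug = Bug()
for step in ["confirm", "resolve", "verify", "ship"]:
    bug.apply(step, resolution="FIXED")
print(bug.status, bug.resolution)  # CLOSED FIXED
```

Encoding the flow as a table makes the rules in the surrounding text explicit: for example, a CLOSED bug can only come back to life through REOPENED, and a resolution is only attached when a bug moves to RESOLVED.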
Bugs may come back to life by becoming REOPENED. <br />Be careful when changing the status of someone else's bugs. Instead of making the change yourself, it's usually best to make a note of your proposed change as a comment and to let the bug's owner review this and make the change themselves. For instance, if you think one bug is a duplicate of another, make a note of it in the Additional Comments section. <br />If you make a lot of useful comments to someone's bugs they may come to trust your judgment and ask you to go ahead and make the changes yourself, but unless they do, it's best to be cautious and only make comments.<br /><br />To practice on Bugzilla, use http://bugzilla.applabs.net/dummy/index.cgi<br />User Id: dummy@applabs.net<br />Password: dummy<br /><br /><br />5.0 About Test Tracker<br />Seapine’s TestTrack Pro delivers time-saving features that keep all team members informed and on schedule. Its advanced configurability and scalability make it the most powerful solution at the best value.<br />Highlights of TestTrack:<br />Work as a Team<br />Bug tracking is a team activity involving engineers, testers, managers, and tech writers — even members of the sales and marketing teams can get involved. TestTrack Pro makes it easy to coordinate activities between team members, but most importantly, TestTrack Pro makes it easy to participate. <br /><br />Manage Your Development Process<br />TestTrack Pro's fully customizable workflows let you tailor it to drive your development process. With definable states, events, and state transition rules, you can model your most complex workflow processes. And, TestTrack Pro will even diagram the workflow for you!<br /><br />Improve Product Quality with Source Code Integration<br />TestTrack Pro integrates with third-party source code control applications, such as Microsoft Visual SourceSafe, Merant PVCS, Perforce, and others. 
This integration enhances your ability to associate specific defects logged in TestTrack Pro with your source code, thus enhancing your product's quality.<br /><br />Web Access - Work from Anywhere<br />Whether you are at your desk, on the road, or working from home, TestTrack Pro has the tools you need to access your bug database. All of TestTrack Pro's features can be accessed through a Web browser on virtually any operating system or through our Windows client. And TestTrack Pro's client/server architecture allows you to place your Windows client at any location on the Internet and still access your bugs (provided your firewall lets you in!) <br /><br />Stay up to Date on Your Projects<br />TestTrack Pro lets any authorized user look up the current state of any defect at any time. And TestTrack Pro's comprehensive email notification support is second to none. TestTrack Pro notifies team members by e-mail when bugs are assigned to them or new bugs are added — even when a specific bug changes. Unlike competing products, TestTrack Pro includes SMTP- and MAPI-based email notification support at no additional cost.<br /><br />Manage Your Projects Effectively<br />TestTrack Pro puts quality control statistics at your fingertips. Do you want to know who reported the most bugs, how many are still open, or how much time a user spent fixing bugs? This information and more is just a click or two away. How about a bug’s history — who found, fixed, and verified it — all of the details are available.<br /><br /><br /><br />6.0 Rational ClearQuest:<br /> Features and benefits <br />IBM Rational® ClearQuest® is a powerful and highly flexible defect and change tracking system that captures and manages all types of change requests throughout the development lifecycle, helping organizations quickly deliver higher quality software.<br />Whether you're working on Windows, UNIX or the Web, the fully customizable interface and workflow engine adapt to any development process. 
With support for industry standard databases, Rational ClearQuest scales to support projects of any size, and integration with other development solutions ensures that your entire team is tied into the defect and change tracking process.<br />Rational ClearQuest is also deeply integrated with leading IDEs, including WebSphere Studio, Eclipse and .NET, providing developers with instant access to change information from within their preferred development environment.<br /><br />6.1 Highlights <br />1. Enables easy customization of defect and change request fields, processes, user interface, queries, charts and reports. <br />2. Complete out-of-the-box solution includes automatic e-mail notification and submission. <br />3. Scales easily to support projects regardless of team size, location or platform. <br />4. "Design once, deploy anywhere" capabilities automatically propagate changes to client interfaces on all platforms, including Windows, UNIX and the Web. <br />5. Offers geographically distributed teams instant access to defect and change data by using a robust and error-free replicating and synchronizing mechanism. <br />6. Works with Rational ClearCase to provide a complete SCM solution. <br />7. Included in IBM Rational Suite® for powerful defect and change tracking across the lifecycle. <br />8. Provides a core component of Rational's Unified Change Management solution. <br />9. Options for Rational ClearQuest MultiSite support change management across geographically distributed organizations.<br />7.0 Conclusion:<br />Recent years have brought a major breakthrough in the field of application testing. 
Growing complexity of today's applications, combined with increased competitive pressures and skyrocketing costs of application failure and downtime, has catapulted the need for testing to new highs.<br />While the pressures to deliver high-quality applications continue to mount, shrinking development and deployment schedules, geographically distributed organizations, limited resources and high turnover rates for skilled employees make application testing the ultimate challenge. Faced with the reality of having to do more with less, juggle multiple projects and manage diverse and distributed project teams, many organizations are adopting test-management methodologies and turning to automated test-management tools to help centralize, organize, prioritize and document their testing efforts.<br />An efficient bug tracking system is the most important component of the application testing process; in view of the factors above, it ensures that testing efforts are centralized, organized, prioritized and documented.<br /><br />8.0 Unit Summary<br />In this session we have learnt<br />1. Defect tracking systems and their purpose.<br />2. Bug lifecycle<br />3. Defect tracking using Bugzilla<br />4. Features of a few other defect tracking systems.<br /><br />8.1 Exercise<br /><br />5. Explain the terms “Bug” and “Bug Tracking System”?<br />6. Explain the importance of bug tracking?<br />7. Explain the “Bug Life Cycle”?<br />8. List some of the bug tracking tools and their features?<br />9. 
List the steps of posting a bug in Bugzilla?<br /><br /><span style="font-weight:bold;">Software Test Execution</span><br />Posted by Rajesh Babu Rajamanickam, 2008-01-14<br /><br />Table of Contents<br /><br />1.0 General<br /> Module Objectives<br /> Module Structure<br />2.0 Software Test Execution<br /> Execute Tests and Record Results<br /> Report Test Results<br /> Testing Software Installation<br /> Acceptance Test<br /> Test Software Changes<br /> Testing in a Multiplatform Environment<br /> Testing Specialized Systems and Applications<br /> Testing Web-based Systems<br /> Testing Off-the-Shelf Software<br /> Testing Client / Server Systems<br /> Evaluate Test Effectiveness<br /> Building Test Documentation<br />3.0 Unit Summary<br /> Exercise<br /><br />1.0 General<br />Test execution forms the core component of any testing project. Extreme care needs to be taken in performing the test execution activity as per the test strategy and the test planning done in the previous phases. 
Test execution involves the work of actually executing what has been planned.<br />1.1 Module Objectives<br />At the end of this session you should be able to:<br /> Understand different approaches to test execution.<br /> Perform test execution, and record and report test results.<br /> Create test documentation.<br />1.2 Module Structure<br />S.no Topic Duration<br />1 Test execution & report test results 2<br />2 Approaches to execution 6<br /> Total Duration 8<br /><br /><br />2.0 Software Test Execution<br />2.1 Execute Tests and Record Results<br /><br />Concerns:<br />Testers have three major concerns on entering the test execution step:<br /><br />1. Software not in a testable mode<br />2. Inadequate time/resources<br />3. Significant problems will not be uncovered during testing<br /><br />Tasks:<br /><br />The execution involves performing the following three tasks:<br />Build Test Data<br /><br />Experience shows that it is uneconomical to test all conditions in an application system. Experience further shows that most testing exercises fewer than one-half of the computer instructions. 
Therefore, optimizing testing through selecting the most important test transactions is the key aspect of the test data test tool.<br /><br />Test File Design<br /><br />To be effective, a test file should use transactions having a wide range of valid and invalid input data – valid data for testing normal processing operations and invalid data for testing programmed controls.<br /><br />General types of conditions that should be tested are as follows.<br /><br />Tests of normally occurring transactions<br />To test a computer system’s ability to accurately process valid data, a test file should include transactions that normally occur.<br /><br />Tests using invalid data<br />Testing for the existence or effectiveness of programmed controls requires using invalid data.<br /><br />Tests to violate established edit checks<br />From system documentation, the auditor should be able to determine what edit routines are included in the computer programs to be tested. He or she should then create test transactions to violate these edits to see whether they, in fact, exist.<br /><br />Entering Test Data<br />After the types of test transactions have been determined, the test data should be put into correct entry form. If the test team wishes to test controls over both input and computer processing, they should feed the data into the system on basic source documents for the organization to convert into machine-readable form.<br /><br />Analyzing Processing Results<br />Before processing test data through the computer, the test team must predetermine the correct result for each test transaction for comparison with actual results. <br /><br />Applying Test Files against Programs that Update Master Records<br />There are two basic approaches to test programs for updating master records. In one approach, copies of actual master records and/or simulated master records are used to set up a separate master file for the test. 
In the second approach, special audit records, kept in the organization’s current master file, are used.<br /><br />Test File Process<br />The recommended nine-step process for the creation and use of test data is as follows:<br /><br /> Identify test resources<br /> Identify test conditions<br /> Rank test conditions<br /> Select conditions for testing<br /> Determine correct results of processing<br /> Create test transactions<br /> Document test conditions<br /> Conduct test<br /> Verify and Correct<br /><br /><br />Volume Test Tool<br />Volume testing is a tool that supplements test data. The objective is to verify that the system can perform properly when internal program or system limitations have been exceeded. This may require that large volumes of transactions be entered during testing.<br /><br />The types of internal limitations that can be evaluated with volume testing include:<br /><br /> Internal accumulation of information, such as tables<br /> Number of line items in an event, such as the number of items that can be included within an order<br /> Size of accumulation fields<br /> Data-related limitations, such as leap year, decade change, switching calendar years, and so on<br /> Field size limitations, such as number of characters allocated for people’s names<br /> Number of accounting entities, such as number of business locations, state/country in which business is performed, and so on.<br /><br />The concept of volume testing is as old as the processing of data in information services. What is necessary to make the concept work is a systematic method of identifying limitations. 
The recommended steps for determining program/system limitations follow.<br /><br /> Identify input data used by the program<br /> Identify data created by the program<br /> Challenge each data element for potential limitations<br /> Document limitations<br /> Perform volume testing<br /><br /><br />Creating Test Scripts<br />The following five tasks are needed to develop, use, and maintain test scripts.<br /><br />Determine Testing Levels<br />There are five levels of testing for scripts, as follows.<br /><br />Unit Scripting. Develop a script to test a specific unit/module.<br />Pseudo concurrency scripting. Develop scripts to test when there are two or more users accessing the same file at the same time.<br />Integration scripting. Determine that various modules can be properly linked.<br />Regression scripting. Determine that the unchanged portions of systems remain unchanged when the system is changed. (Note: This is usually performed with the information captured on capture/playback software systems.)<br />Stress/performance scripting. Determine whether the system will perform correctly when it is stressed to its capacity. This validates the performance of the software when stressed by large numbers of transactions. The testers need to determine which (or all) of these five levels of scripting to include in the script.<br /><br />Develop Script<br />This task is also normally done using the capture/playback tool. The script is a complete series of related terminal actions. 
The development of a script involves a number of considerations, as follows:<br /> Script components<br /> Terminal input<br /> Programs to be tested<br /> Files involved<br /> On-line operating environment<br /> Terminal output<br /> Manual entry of script transactions<br /> Date setup<br /> Secured initialization<br /> File restores<br /> Password entry<br /> Update<br /> Automated entry of script transactions<br /> Edits of transactions<br /> Navigation of transactions through the system<br /> Inquiry during processing<br /> External considerations<br /> Program libraries<br /> File states/contents<br /> Screen initialization<br /> Operating environment<br /> Security considerations<br /> Complete scripts<br /> Start and stop considerations<br /> Start; usually begins with a clear screen<br /> Start; begins with a transaction code<br /> Scripts; end with a clear screen<br /> Script contents<br /> Sign-on<br /> Setup<br /> Menu navigation<br /> Function<br /> Exit<br /> Sign-off<br /> Clear screen<br /> Security considerations<br /> Changing passwords<br /> User identification/security rules<br /> Reprompting<br /> Single-terminal user identifications<br /> Sources of scripting transactions<br /> Terminal entry of scripts<br /> Operations initialization of files<br /> Application program interface (API) communications<br /> Special considerations<br /> Single versus multiple terminals<br /> Date and time dependencies<br /> Timing dependencies<br /> Inquiry versus update<br /> Unit versus regression test<br /> Organization of scripts (recommended by purpose)<br /> Unit test organization<br />o Single functions (transactions)<br />o Single terminal<br />o Separate inquiry from update<br />o Self-maintaining<br /> Pseudo concurrent test<br />o Single functions (transactions)<br />o Multiple terminals<br />o Three steps: setup (manual/script), test (script), and reset (manual/script)<br /> Integration test (string testing)<br />o Multiple functions (transactions)<br
/>o Single terminal<br />o Self-maintaining<br /> Regression test<br />o Multiple functions (transactions)<br />o Multiple terminals<br />o Three steps: setup (external), test (script), and reset (external)<br /> Stress/performance test<br />o Multiple functions (transactions)<br />o Multiple terminals (2 X rate)<br />o Iterative/vary arrival rate; three steps: setup (external), test (script), and collect performance data.<br /><br />Execute Script<br />The script can be executed manually or by using the capture/playback tools.<br />Caution: Be reluctant to use scripting extensively unless a software tool drives the script. Some of the considerations to incorporate into script execution are:<br /><br /> Environmental setup<br /> Program libraries<br /> File states/contents<br /> Date and time<br /> Security<br /> Multiple terminal arrival modes<br /> Serial (cross-terminal) dependencies<br /> Pseudo concurrent<br /> Processing options<br /> Stall detection<br /> Synchronization<br /> Rate<br /> Arrival rate<br /> Think time<br /><br /> <br /><br /><br /><br /><br /><br /> Analyze Results<br />After executing the test script, the results must be analyzed. However, much of this should have been done during the execution of the script, using the operator instructions provided. Please note that if a capture/playback software tool is used, analysis will be more extensive after execution. The result analysis should include the following:<br /><br />• System components<br />• Terminal outputs (screens)<br />• File content at conclusion of testing<br />• Environment activities, such as:<br />o Status of logs<br />o Performance data (stress results)<br />• On-screen outputs<br />• Order of outputs processing<br />• Compliance of screens to specifications<br />• Ability to process actions<br />• Ability to browse through data<br /><br />Maintain Scripts<br />Once developed, scripts need to be maintained so that they can be used throughout development and maintenance. 
The areas to incorporate into the script maintenance procedure are:<br /><br />• Programs<br />• Files<br />• Screens<br />o Insert (transactions)<br />o Delete<br />o Arrange<br />• Field<br />o Changed (length, content)<br />o New<br />o Moved<br />• Expand test cases<br /><br /><br />Several characteristics of scripting are different from batch test data development. These differences are:<br /><br /> Data entry procedures required<br /> Use of software packages<br /> Sequencing of events<br /> Stop procedures<br /><br /><br /> <br />Execute Tests<br /><br />There are many methods of testing an application system. The test team is concerned that all of these forms of testing occur so that the organization has the highest probability of success when installing a new application system.<br />The test team should address the following types of tests during the test phase.<br /><br /> Manual, Regression, and Functional Testing (Reliability)<br /> Compliance Testing (Authorization)<br /> Functional Testing (File Integrity)<br /> Functional Testing (Audit Trail)<br /> Recovery Testing (Continuity of Processing)<br /> Stress Testing (Service Level)<br /> Compliance Testing (Security)<br /> Testing Complies with Methodology<br /> Functional Testing (Correctness)<br /> Manual Support Testing (Ease of use)<br /> Inspections (Maintainability)<br /> Disaster Testing (Portability)<br /> Functional and Regression Testing (Coupling)<br /> Compliance Testing (Performance)<br /> Operations Testing (Ease of Operations)<br /><br /><br />Record Test Results<br />A test problem is a condition that exists within the software system that needs to be addressed. Carefully and completely documenting a test problem is the first step in correcting the problem.<br /><br />The following four attributes should be developed for all test problems:<br /><br />1. Statement of condition. Tells what is.<br />2. Criteria. Tells what should be.<br />These two attributes are the basis for a finding. 
If a comparison between the two reveals a difference of little or no practical consequence, no finding exists.<br /><br />3. Effect. Tells why the difference between what is and what should be is significant.<br />4. Cause. Tells the reasons for the deviation. Identification of the cause is necessary as a basis for corrective action.<br /><br />A well-developed problem statement will include each of these attributes. When one or more of these attributes is missing, questions almost always arise, such as:<br /><br /> Condition. What is the problem?<br /> Criteria. Why is the current state inadequate?<br /> Effect. How significant is it?<br /> Cause. What could have caused the problem?<br /><br />Documenting a statement of a user problem involves three subtasks, which are explained in the following paragraphs.<br /><br />Document Deviation<br />Documenting the deviation means describing the conditions as they currently exist and the criteria, which represent what the user desires. The actual deviation will be the difference or gap between “what is” and “what is desired”.<br /><br />The statement of condition should document as many of the following attributes as appropriate for the problem.<br /><br />Activities involved. The specific business or administrative activities that are being performed.<br />Procedures used to perform work. The specific step-by-step activities that are utilized in producing output from the identified activities.<br />Outputs/deliverables. The products that are produced from the activity.<br />Inputs. The triggers, events, or documents that cause this activity to be executed.<br />Users/customers served. The organization, individuals, or class of users/customers serviced by this activity.<br />Deficiencies noted. 
The status of the results of executing this activity and any appropriate interpretation of those facts.<br /><br />Document Effect – Efficiency, economy, and effectiveness are useful measures of effect and frequently can be stated in quantitative terms such as dollars, time, units of production, number of procedures and processes, or transactions.<br /><br />Document Cause – The cause is the underlying reason for the condition. <br />Most findings involve one or more of the following causes:<br /><br />• Nonconformity with standards, procedures, or guidelines<br />• Nonconformity with published instructions, directives, policies, or procedures from a higher authority<br />• Nonconformity with business practices generally accepted as sound<br />• Employment of inefficient or uneconomical practices.<br /><br />The determination of the cause of a condition usually requires the scientific approach, which encompasses the following steps:<br /><br />• Define the problem (the condition that results in the finding).<br />• Identify the flow of work and/or information leading to the condition.<br />• Identify the procedures used in producing the condition.<br />• Identify the people involved.<br />• Recreate the circumstances to identify the cause of a condition.<br /><br />2.2 Report Test Results<br /><br />Concerns<br /><br />The individuals responsible for assuring that software projects are accurate, complete, and meet users’ true needs have the following concerns regarding the status of a project.<br /><br />• Test results will not be available when needed.<br />• Test information is inadequate.<br />• Test status is not delivered to the right people.<br /><br />Input:<br /><br />There are three types of input needed to answer management’s questions about the status of the software system. 
They are as follows:<br /><br />• Test Plan(s) and Project Plan(s): Testers need both the test plan and the project plan, both of which should be viewed as contracts. The project plan is the project’s contract with management for work to be performed, and the test plan is a contract indicating what the testers will do to determine whether the software is complete and correct. It is against these two plans that testers will report status.<br />• Expected Processing Results: Testers report the status of actual results against expected results. To make these reports, the testers need to know what results are expected. For software systems the expected results are the business results.<br />• Data Collected During Testing: Four categories of data will be collected during testing. These are as follows:<br />o Test Results Data: This data will include, but not be limited to:<br /> Test factors<br /> Business Objectives<br /> Interface Objectives<br /> Functions/sub functions<br /> Units<br /> Platform<br />o Test Transactions, Test Suites, and Test Events: These are the test products produced by the test team to perform testing. They include, but are not limited to:<br /> Test transactions/events<br /> Inspections<br /> Reviews<br />o Defects: This category includes a description of the individual defects uncovered during testing. 
This description includes, but is not limited to:<br /> Date the defect was uncovered<br /> Name of the defect<br /> Location of the defect<br /> Severity of the defect<br /> Type of defect<br /> How the defect was uncovered (i.e., test data/test script)<br />o Efficiency: Two types of efficiency can be evaluated during testing: software system and test.<br />o Storing Data Collected During Testing: It is recommended that a database be established in which to store the results collected during testing.<br />The most common test report is a simple spreadsheet, which indicates the project component for which status is requested, the test that will be performed to determine the status of that component, and the results of testing at any point in time.<br /><br />The process of reporting is divided into three tasks as follows.<br /><br />Tasks<br /><br />Task 1: Report Software Status<br /> There are two levels of project status reports:<br /><br />1. Summary Status report. Provides a general view of all project components.<br />2. Project Status report. Shows detailed information about a specific project component, allowing the reader to see up-to-date information about schedules, budgets, and project resources.<br />Both reports are designed to present information clearly and quickly.<br /><br />Prior to effectively implementing a project reporting process, two inputs must be in place.<br /><br />1. Measurement units: Information services must have established reliable units of measure that can be validated.<br />2. Process requirements: Process requirements for a project reporting system must include functional, quality, and constraint attributes.<br />The six subtasks for this task are described in the following subsections.<br /><br />1. Establish a Measurement Team<br />2. Inventory Existing Project Measures<br />3. Develop a Consistent Set of Project Metrics<br />4. Define Process Requirements<br />5. Develop and Implement the Process<br />6. 
Monitor the Process<br /><br /><br /><br /><br /><br />Summary status report:<br /><br />The summary status report provides general information about all projects.<br />This is divided into four sections.<br />• Report date information<br />• Project Information<br />• Time line information<br />• Legend Information<br /><br />Project status report:<br /><br />The Project status report provides information related to a specific project component. This is divided into six sections.<br /><br />• Vital Project Information<br />• General Project Information<br />• Project Activities Information<br />• Essential Elements Information<br />• Legend Information<br />• Project highlights information<br /><br /><br /><br /><br /><br /><br />Task 2: Report Interim Test Results<br /><br /> The test process should produce a continuous series of reports that describe the status of testing. The test reports are for use by the testers, the test manager, and the software development team.<br /><br /> Nine interim reports are proposed here. Testers can use all nine or select specific ones to meet individual test needs.<br /><br />1. Function/Test Matrix<br />2. Functional Testing Status Report<br />3. Functions Working Time Line<br />4. Expected versus Actual Defects Uncovered Time Line<br />5. Defects Uncovered versus Corrected Gap Time Line<br />6. Average Age of Uncorrected Defects by Type<br />7. Defect Distribution Report<br />8. Relative Defect Distribution Report<br />9. Testing Action Report<br /><br /><br />Task 3: Report Final Test Results<br /><br /> A final test report should be prepared at the conclusion of each test activity. 
This might include:<br /><br />• Individual Project test report (e.g., a single software system)<br />• Integration Test report <br />• System Test report<br />• Acceptance Test report<br /><br />2.3 Testing Software Installation<br /><br />The process of installation testing is attempting to validate that:<br /><br />• Proper programs are placed into the production status.<br />• Needed data is properly prepared and available.<br />• Operating and user instructions are prepared and used.<br /><br />Input: The installation phase is the process of getting a new system operational. The process may involve any or all of the following areas:<br /><br />• Changing old data to a new format.<br />• Creating new data<br />• Installing new and/or changed programs<br />• Updating computer instructions<br />• Installing new user instructions.<br /><br />Much of the test process will be evaluating and working with installation phase deliverables. The more common deliverables produced during the installation phase include:<br /><br />• Installation plan<br />• Installation flowchart<br />• Installation program listings and documentation (assuming special installation programs are required).<br />• Test results from testing special installation programs<br />• Documents requesting movement of programs into the production library and removal of current programs from that library.<br />• New operator instructions<br />• New user instruction procedures<br />• Results of installation process<br /><br /><br />The process of installation is divided into tasks.<br /><br />Task 1a: Test Installation of New Software<br /><br />The following are the concerns of installation testing.<br /><br />1. Accuracy and completeness of installation verified (reliability). <br />2. Data changes during installation prohibited (authorization)<br />3. Integrity of production files verified.<br />4. Installation audit trail recorded.<br />5. 
Integrity of previous system assured (continuity of processing).<br />6. Fail-safe installation plan implemented (service level).<br />7. Access controlled during installation (security).<br />8. Installation complies with methodology.<br />9. Proper programs and dates placed into production.<br />10. Usability instructions disseminated.<br />11. Documentation complete (maintainability).<br />12. Documentation complete (portability).<br />13. Interface coordinated (coupling).<br />14. Integration performance monitored.<br />15. Operating procedures implemented.<br /><br /><br />Task 1b: Test Changed Version (of Software)<br /><br />The specific objectives of installing the change are as follows:<br /><br />• Put changed application systems into production. <br />• Assess the efficiency of changes.<br />• Monitor the correctness of the change.<br />• Keep systems library up to date.<br /><br />Most common concerns during the installation of the change include the following:<br /><br />• Will the change be installed on time?<br />• Is backup data compatible with the changed system?<br />• Are recovery procedures compatible with the changed system?<br />• Is the source/object library cluttered with obsolete program versions?<br />• Will errors in the change be detected?<br />• Will errors in the change be corrected?<br /><br />Testing the installation of changes is divided into three tasks.<br /><br />• Test the restart/recovery plan<br />• Verify the correct change has been entered into production<br />• Verify unneeded versions have been deleted<br /><br /><br />Task 2: Monitor production<br /><br />The following groups may monitor the output of a new program version:<br /><br />• Application system control group.<br />• User personnel.<br />• Software maintenance personnel.<br />• Computer operations personnel.<br /><br />Regardless of who monitors the output, the software maintenance analyst and user personnel should provide 
clues about what to look for. User and software maintenance personnel must attempt to identify the specific areas where they believe problems might occur.<br /><br />The types of clues that could be provided to monitoring personnel include:<br /><br />• Transactions to investigate.<br />• Customers.<br />• Reports.<br />• Tape files.<br />• Performance.<br /><br /><br />Task 3: Document problems<br /><br />Individuals detecting problems when they monitor changes in application systems should formally document them. The formal documentation process can be made even more effective if the forms are controlled through a numbering sequence. <br />The individual monitoring the process should be asked both to document the problem and to assess the risk associated with that problem.<br /><br /><br /><br /><br /><br /><br /><br /><br /><br />2.4 Acceptance Test<br /><br />1. Define the Acceptance Criteria<br />In preparation for developing the acceptance criteria, the user should:<br /><br />• Acquire full knowledge of the application for which the system is intended.<br />• Become fully acquainted with the application as it is currently implemented by the user’s organization.<br />• Understand the risks and benefits of the development methodology that is to be used in correcting the software system.<br />• Fully understand the consequences of adding new functions to enhance the system.<br /><br />Acceptance requirements that a system must meet can be divided into these four categories:<br /><br />• Functionality requirements, which relate to the business rules that the system must execute.<br />• Performance requirements, which relate to operational requirements such as time or resource constraints.<br />• Interface quality requirements, which relate to a connection to another component of processing (e.g., human/machine, machine/module).<br />• Overall software quality requirements are those that specify limits for factors or attributes such as reliability, testability, 
correctness, and usability.<br /><br />2. Develop an Acceptance Plan<br /><br />The first step to achieve software acceptance is the simultaneous development of a software acceptance plan, general project plans, and software requirements to ensure that user needs are represented correctly and completely. This simultaneous development will provide an overview of the acceptance activities, to ensure that resources for them are included in the project plans.<br />After the initial software acceptance plan has been prepared, reviewed, and approved, the acceptance manager is responsible for implementing the plan and for assuring that the plan’s objectives are met.<br /><br />3. Execute the Acceptance Plan (Conduct Acceptance Tests and Reviews)<br /><br />The objective of this step is to determine whether the acceptance criteria have been met in a delivered product. This can be accomplished through reviews, which involve looking at interim products and partially developed deliverables at various points throughout the developmental process.<br /><br />a. Developing Test Cases (Use Cases) Based on How Software Will Be Used<br /><br />Incomplete, incorrect, and missing test cases can cause incomplete and erroneous test results. Flawed test results cause rework at minimum and, at worst, a flawed system to be developed. There is a need to ensure that all required test cases are identified so that all system functionality requirements are tested.<br /><br />A use case is a description of how a user (or another system) uses the system being designed to perform a given task. A system is described by the sum of its use cases. Each instance or scenario of a use case will correspond to one test case.<br /><br />The following subtasks are performed:<br />i. Build System Boundary Diagram<br />ii. Define Use Cases<br />iii. Develop Test Cases<br /><br />4. Reach an Acceptance Decision<br />Typical acceptance decisions include:<br /><br />1. 
Required changes are accepted before progressing to the next activity.<br />2. Some changes must be made and accepted before further development of that section of the product; other changes may be made and accepted at the next major review.<br />3. Progress may continue and changes may be accepted at the next review.<br />4. No changes are required and progress may continue.<br /><br />2.5 Test Software Changes<br /><br />Information Technology management should be concerned about the implementation of the testing and training objectives.<br /><br />The following five tasks should be performed to effectively test a changed version of software.<br /><br />Task 1: Develop/Update the Test Plan<br /><br />The test plan for software maintenance is a shorter, more directed version of a test plan used for a new application system. While new application testing can take many weeks or months, software maintenance testing often must be done within a single day or a few hours. Because of time constraints, many of the steps that might be performed individually in a new system are combined or condensed into a short time span. This increases the need for planning so that all aspects of the test can be executed within the allotted time.<br /><br />The elements to be tested (types of testing) are:<br />• Changed transactions<br />• Changed programs<br />• Operating procedures<br />• Control group procedures<br />• User procedures<br />• Intersystem connections<br />• Job control language<br />• Interface to systems software<br />• Execution of interface to software systems<br />• Security<br />• Backup/recovery procedures<br /><br />Task 2: Develop/Update the Test Data<br /><br />Data must be prepared for testing all the areas changed during a software maintenance process. For many applications, the existing test data will be sufficient to test the new change. 
However, in many situations new test data will need to be prepared.<br /><br />It is important to test both what should be accomplished and what can go wrong. Most tests do a good job of verifying that the specifications have been implemented properly. Where testing frequently is inadequate is in verifying unanticipated conditions. Included in this category are:<br /><br />• Transactions with erroneous data<br />• Unauthorized transactions<br />• Transactions entered too early<br />• Transactions entered too late<br />• Transactions that do not correspond with master data contained in the application<br />• Grossly erroneous transactions, such as transactions that do not belong to the application being tested<br />• Transactions with larger values in the fields than anticipated<br /><br />There are three methods that can be used to develop/update test data, as follows:<br /><br />Method 1: Update existing test data<br /><br />If test files have been created for a previous version, they can be used for testing a change. However, the test data will need to be updated to reflect the changes to the software.<br /><br />Method 2: Create new test data<br /><br />The creation of new test data for maintenance follows the same methods as creating test data for a new software system.<br /><br />Method 3: Use production data for testing<br /><br />Tests are performed using some or all of the production data for test purposes (date modified, of course), particularly when there are no function changes. 
Using production data for test purposes may result in the following impediments to effective testing:<br /><br />• Missing test transactions<br />• Multiple tests of the same transaction<br />• Unknown test results<br />• Lack of ownership<br /><br />Production Data Definition: The following categories of production data can be used in testing:<br /><br />• Transaction files<br />• Business master files<br />• Master files of business data<br />• Error files<br />• Operations, communications, database, or accounting logs<br />• Manual logs<br /><br />This production data can be used for test purposes. In some instances, it yields test transactions (e.g., a transaction file); in other cases, it provides information about performance results (e.g., an SMF log or job accounting log). To use production data as test data, testers first must determine the type of production data to use (e.g., a business transaction file). Then they can perform one or more of the following five steps to convert that production file to a test file.<br /><br />• Select the First Batch of Records<br />• Protect Production Files from Modification<br />• Select a Random Sample of Transactions<br />• Browse through the Production File<br />• Do Parallel Testing<br /><br />Task 3: Test the Control Change Process<br /><br />The following three subtasks are commonly used to control and record changes. If the staff performing the corrections do not have such a process, the testers can give them these subtasks and then request the work papers when complete. Testers should verify completeness using these three subtasks as a guide.<br /><br />1. Identify and Control Change<br />An important aspect of changing a system is identifying which parts of the system will be impacted by that change. The impact may be in any part of the application system, both manual and computerized, as well as in the supporting software system. 
Regardless of whether impacted areas will require changes, at a minimum there should be an investigation into the extent of the impact.<br /><br /> 2. Document Change Needed on Each Data Element<br /> Whereas changes in processing normally impact only a single program or a small number of interrelated programs, changes to data may impact many applications. Thus, changes that impact data may have a more significant effect on the organization than those that impact processing.<br /><br /> 3. Document Changes Needed in Each Program<br /> The implementation of most changes will require some programming alterations. Even a change of data attributes will often necessitate program changes. Some of these will be minor in nature, while others may be extremely difficult and time-consuming to implement.<br /><br />Task 4: Conduct Testing<br /><br />Software change testing is normally conducted by both the user and software maintenance test team. The testing is designed to provide the user assurance that the change has been properly implemented. <br />An effective method for conducting software maintenance testing is to prepare a checklist providing both the administrative and technical data needed to conduct the test. This ensures that everything is ready at the time the test is to be conducted.<br /><br />Task 5: Develop/Update the Training Material<br /><br />Updating training material for users, and training users, is not an integral part of many software change processes. Therefore, this task description describes a process for updating training material and performing that training.<br />The training requirements are incorporated into existing training material. 
Therefore, it behooves the application project personnel to maintain an inventory of training material.<br /><br />• Training Plan Work Paper<br />• Training Material Inventory Form<br />• Prepare Training Material<br />• Conduct Training<br /><br />2.6 Testing in a Multiplatform Environment<br /><br />Overview:<br /><br />Each platform on which the software is designed to execute operationally may have slightly different characteristics. These distinct characteristics include various operating systems, hardware configurations, operating instructions, and supporting software, such as database management systems. The objective of testing is to determine whether the software will produce the correct results on various platforms.<br /><br />Objective:<br /><br />The objective of this process is to validate that a single software package executed on different platforms will produce the same results. 
The test process is basically the same as that used in parallel testing.<br /><br />Concerns:<br /><br />There are three major concerns in multiplatform testing:<br /><br />• The platforms in the test lab will not be representative of the platforms in the real world.<br />• The software will be expected to work on platforms not included in the test labs.<br />• The supporting software on various platforms is not comprehensive.<br /><br />Workbench:<br /><br />Most tasks assume that the platforms will be identified in detail, and that the software to run on the different platforms has been previously validated as being correct.<br /><br />Input:<br /><br />The two inputs needed for testing in a multiplatform environment are as follows:<br /><br />• List of platforms on which software must execute<br />• Software to be tested<br /><br />Do Procedures:<br /><br />The following tasks should be performed to validate that software performs consistently in a multiplatform environment.<br /><br />Task 1: Define Platform Configuration Concerns<br />The first task is to develop a list of potential concerns about that environment. The testing that follows will then determine the validity of those concerns. The recommended process for identifying concerns is error guessing.<br />Error guessing requires two prerequisites:<br /><br />1. The error-guessing group understands how the platform works.<br />2. The error-guessing group knows how the software functions.<br /><br />Task 2: List Needed Platform Configurations<br />The testers must identify the platforms that must be tested. Ideally, this list of platforms and a detailed description of each would be input to the test process. 
The needed platforms are either those that will be advertised as acceptable for using the software, or platforms within an organization on which the software will be executed. Testers must then determine whether those platforms are available for testing. If the exact platform is not available, the testers need to determine whether an existing platform is acceptable.<br /><br />Task 3: Assess Test Room Configurations<br />The testers need to determine whether the platforms available in the test room are acceptable for testing. This involves two steps:<br />1. Document the platform to be used for testing, if any is available, on the work paper.<br />2. Make a determination as to whether the available platform is acceptable for testing.<br /><br />Task 4: List Structural Components Affected by the Platform(s)<br />Structural testing deals with the architecture of the system. Architecture describes how the system is put together. It is used in the same context that an architect designs a building. Some of the architectural problems that could affect computer processing include:<br /><br />• Internal limits on the number of events that can occur in a transaction<br />• Maximum size of fields<br />• Disk storage limitations<br />• Performance limitations<br /><br />Task 5: List Interface-Platform Effects<br />Systems tend to fail at interface points, an interface being the point at which control is passed from one processing component to another, for example, when data is retrieved from a database, output reports are printed or transmitted, or a person interrupts processing to make a correction.<br /><br />This is a two-part task. Part one is to identify the interfaces within the software systems. These interfaces should be readily identifiable in the user manual for the software. 
The second part is to determine whether those interfaces could be impacted by the specific platform on which the software executes.<br />At the conclusion of this task, the tests that will be needed to validate multiplatform operations will have been determined.<br /><br />Task 6: Execute the Tests<br />The platform test should be executed in the same manner as other tests are executed. The only difference may be that the same test would be performed on multiple platforms to determine that consistent processing occurs.<br /><br />Check Procedures:<br />Prior to completing multiplatform testing, a determination should be made that testing was performed correctly.<br /><br />Output:<br />The output from this test process is a report indicating:<br /><br />• Structural components that work or don’t work by platform<br />• Interfaces that work or don’t work by platform<br />• Multiplatform operational concerns that have been eliminated or substantiated<br />• Platforms on which the software should operate, but that have not been tested<br /><br />Guidelines:<br />Multiplatform testing is a costly, time-consuming, and extensive component of testing. The resources expended on multiplatform testing can be significantly reduced if that testing focuses on predefined multiplatform concerns. Identified structural components that might be impacted by multiple platforms should comprise most of the testing. This will focus the testing on what should be the major risks faced in operating a single software package on many different platforms.<br /><br />2.7 Testing Specialized Systems and Applications<br /><br />2.7.1 Testing Web-based Systems<br /><br />Overview:<br /><br />The client workstations are networked to a web server, either through a remote dial-in connection or through a network such as a local area network (LAN) or wide area network. 
As the web server receives and processes requests from the client workstation, requests may be sent to the application server to perform actions such as data queries, electronic-commerce transactions, and so forth.<br /><br />Objective:<br /><br />The objective is to assess the adequacy of the web components of software applications. Web-based testing generally only needs to be done once for any applications using the web.<br /><br />Concerns:<br /><br />The concerns that the tester should have when conducting web-based testing are as follows:<br /><br />• Browser compatibility<br />• Functional correctness<br />• Integration<br />• Usability<br />• Security<br />• Performance<br />• Verification of code<br /><br />An additional concern is that web terminology will not be understood by the web-based testers. The following are common web terms:<br /><br />• Browser<br />• Hypertext Markup Language (HTML)<br />• Platform<br />• Java<br />• Web server<br />• Application server<br />• Back end<br />• Firewall<br />• Uniform Resource Locator (URL)<br />• Electronic commerce (e-Commerce)<br />• Component<br />• Common Gateway Interface (CGI)<br />• Bandwidth<br />• Secure Sockets Layer (SSL)<br />• File Transfer Protocol (FTP)<br /><br />Input:<br /><br />The input to this test process is the description of web-based technology used in the systems being tested.<br />The following list shows how web-based systems differ from other technologies:<br /><br />• Uncontrolled user interfaces (browsers)<br />• Complex distributed systems<br />• Security issues<br />• Multiple layers in architecture<br />• New terminology and skill sets<br />• Object-oriented<br />• Nonstandardized<br /><br />Do Procedures:<br /><br />Testing of a web-based system involves performing the following four tasks.<br /><br />Task 1: Select Web-Based Risks to Include in the Test Plan<br /><br />Risks are important to understand because they reveal what to test. 
Each risk points to an entire area of potential tests. In addition, the degree of testing should be based on risk. The risks are briefly listed below, followed by a more detailed description of the concerns associated with each risk.<br /><br />• Security<br />• Performance<br />• Correctness<br />• Compatibility (configuration)<br />• Reliability<br />• Data Integrity<br />• Usability<br />• Recoverability<br /><br />Key areas of Concern: Security<br /><br />The following are the security risks that need to be addressed in an Internet application test plan:<br /><br />• External intrusion<br />• Protection of secured transactions<br />• Viruses<br />• Access control<br />• Authorization levels<br /><br />Key areas of Concern: Performance<br /><br />System performance can make or break an Internet application. There are several types of performance testing that can be done to validate the performance levels of an application.<br />Typically, the most common kind of performance testing for Internet applications is load testing. Load testing seeks to determine how the application performs under expected and greater-than-expected levels of activity. Application load can be assessed in a variety of ways:<br /><br />• Concurrency<br />• Stress<br />• Throughput<br /><br />Key areas of Concern: Correctness<br /><br />One of the most important areas of concern is that the application functions correctly. This can include not only the functionality of buttons and “behind the scenes” instructions, but also calculations and navigation of the application.<br /><br />Key areas of Concern: Compatibility<br /><br />Compatibility is the ability of the application to perform correctly in a variety of expected environments. 
Two of the major variables that affect web-based applications are the operating systems and browsers.<br /><br />Common operating systems include:<br />• DOS/Windows<br />• Mac OS<br />• UNIX<br />• VMS<br />• Sun and SGI (Silicon Graphics Inc.)<br />• Linux<br /><br />Popular browsers include:<br /><br />• Microsoft Internet Explorer<br />• Netscape Communicator<br />• Mosaic<br /><br />There are many other lesser-known browsers. You can find information on all different types of browsers at www.browserwatch.com.<br /><br />Browser Configuration:<br /><br />Each browser has configuration options that affect how it displays information.<br />Some of the main things to consider from a hardware compatibility standpoint are:<br /><br />• Monitors, video cards, and video RAM<br />• Audio, video, and multimedia support<br />• Memory (RAM) and hard drive space<br />• Bandwidth access<br /><br />Browser differences can make a web application appear differently to different people. The differences may appear in any of the following areas:<br /><br />• Print handling<br />• Reload<br />• Navigation<br />• Graphics filters<br />• Caching<br />• Dynamic page generation<br />• File downloads<br />• E-mail functions<br /><br />Key areas of Concern: Reliability<br /><br />Because of the continuous uptime requirements for most Internet applications, reliability is a key concern. However, reliability can be considered in more than system availability; it can also be expressed in terms of the reliability of the information obtained from the application:<br /><br />• Consistently correct results<br />• Server and system availability<br /><br />Key areas of Concern: Data Integrity<br /><br />Not only must the data be validated when it is entered into the web application, but it must also be safeguarded to ensure the data stays correct.<br /><br />• Ensuring only correct data is accepted: 
This can be achieved by validating the data at the page level when it is entered by a user.<br /><br />• Ensuring data stays in a correct state: This can be achieved by procedures to back up data and ensure that controlled methods are used to update data.<br /><br />Key areas of Concern: Usability<br /><br />If users or customers find an Internet application hard to use, they will likely go to a competitor’s site. Usability can be validated and usually involves the following:<br /><br />• Ensuring the application is easy to use and understand<br />• Ensuring that users know how to interpret and use the information delivered from the application<br />• Ensuring that navigation is clear and correct<br /><br />Key areas of Concern: Recoverability<br /><br />Internet applications are more prone to outages than systems that are more centralized or located on reliable, controlled networks. Remote accessibility makes the following concerns important:<br /><br />• Lost connections<br />o Timeouts<br />o Dropped lines<br />• Client system crashes<br />• Server system crashes or other application problems<br /><br />Task 2: Select Web-Based Tests<br /><br />Select the type of test based on the requirement and necessity from among the following:<br /><br />• Unit or Component<br />• Integration<br />• System<br />• User Acceptance (Business Process Validation)<br />• Performance<br />• Load/Stress<br />• Regression<br />• Usability<br />• Compatibility<br /><br />Task 3: Select Web-Based Test Tools<br /><br />Effective web-based testing necessitates the use of web-based tools.<br /><br />HTML Test Tools: Although many web development packages include an HTML checker, there are ways to perform a verification of HTML if you do not use/have such a feature. 
An example of a standalone tool is Doctor HTML by Imagineware (http://drhtml.imagiware.com/).<br /><br />Site Validation: Site validation tools check your web applications to identify inconsistencies and errors such as:<br /><br />• Moved pages<br />• Orphaned pages<br />• Broken links<br /><br />An example of a site validation tool is SQA Site Check by Rational Software.<br /><br />Java Test Tools: Java test tools are specifically designed for testing Java applications. Examples include:<br /><br />• NuMega TrueTime Java Edition<br />• Sun Test Suite by Sun Microsystems<br />• Silk Test by Segue Software<br />• SilkScope by Segue Software<br />• Silk Spec by Segue Software<br /><br />Load/Stress Testing Tools: Load/stress tools evaluate web-based systems when subjected to large volumes of data or transactions. Examples of tools that can simulate numerous virtual users and vary transaction rates include:<br /><br />• Astra Site Test by Mercury Interactive<br />• Silk Performer by Segue Software<br /><br />Test Case Generators: Test case generators create transactions for use in testing. This tool can tell you what to test, as well as create test cases that can be used in other test tools. An example of a test case generator is Astra Quick Test by Mercury Interactive.<br />This tool captures business processes into a visual map to generate data-driven tests automatically. Test scripts can be imported to Mercury’s LoadRunner and managed by Test Director.<br /><br />2.7.2 Testing Off-the-Shelf Software<br /><br />Overview:<br /><br />Off-the-shelf software must be made to look attractive if it is to be sold. Thus, the developer of off-the-shelf software (OTSS) will emphasize the benefits of the software. 
Unfortunately, there is often a difference between what the user believes the software can accomplish and what it actually does accomplish.<br /><br />Objective:<br />The objective of this off-the-shelf testing process is to provide the highest possible assurance of correct processing with a minimal effort. However, this process should be used only for noncritical off-the-shelf software.<br /><br />Concerns:<br />The user of off-the-shelf software should be concerned with these areas:<br />• Tasks/items missing.<br />• Software fails to perform.<br />• Extra features.<br />• Does not meet business needs.<br />• Does not meet operational needs.<br />• Does not meet people needs.<br /><br />Input:<br />There are two inputs to this step. The first input is the manuals that accompany the OTSS. These normally include installation and operation manuals. The manuals describe what the software is designed to accomplish, and how to perform the tasks necessary to accomplish the software functions.<br /><br />Do Procedures:<br />The execution of this process involves four tasks plus the check procedures. The process assumes that the individual(s) performing the test has knowledge of how the software will be used in the organization. If the tester does not know how it will be used, an additional step is required for the tester to identify the functionality that will be needed by the users of the software. The four tasks are described as follows:<br /><br />Task 1: Test Business Fit<br />The objective of this task is to determine whether the software meets your needs. The task involves carefully defining your business needs and then verifying whether the software in question will accomplish them.<br />The first step of this task is defining business functions in a manner that can be used to evaluate software capabilities. The second step of this task is matching software capabilities against business needs. 
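The capability-matching step of Task 1 can be sketched in code. The following Python sketch is purely illustrative: the `business_fit` function, the required business functions, and the candidate package's capability list are hypothetical examples, not drawn from any real product or from this text.

```python
# Sketch: matching a software package's capabilities against business needs.
# All names and feature lists below are hypothetical examples.

def business_fit(required_functions, package_capabilities):
    """Return (met, missing) business functions for one candidate package."""
    met = [f for f in required_functions if f in package_capabilities]
    missing = [f for f in required_functions if f not in package_capabilities]
    return met, missing

# Business needs defined in step one of the task:
required = ["order entry", "invoicing", "inventory reporting", "audit trail"]

# Capabilities claimed in a (hypothetical) vendor manual:
candidate = {"order entry", "invoicing", "inventory reporting"}

met, missing = business_fit(required, candidate)
print(f"Met {len(met)} of {len(required)} needs; missing: {missing}")
# → Met 3 of 4 needs; missing: ['audit trail']
```

A missing entry in the result would correspond to the "tasks/items missing" concern listed above and would normally fail the package on business fit.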
<br />Step 1: Completeness of Needs Specification<br />This test determines whether you have adequately defined your needs. Your needs should be defined in terms of the following two categories of outputs:<br /><br />1. Output products/reports. Output products/reports are specific documents that you want produced by the computer system.<br />2. Management Decision Information. This category tries to define the information needed for decision-making purposes.<br /><br />Testing the Completeness of Needs:<br />The first test to be performed for computer software is a test of completeness of needs. This has proved to be one of the major causes of problems in OTSS.<br /><br />The objective of this first test is to help you determine how completely your needs are defined. The test is based on criteria learned from large corporations.<br /><br />Step 2: Critical Success Factor Test<br />This test tells whether the software package will be successful in meeting your business needs.<br />Critical Success Factors (CSFs) are those criteria or factors that must be present in the acquired software for it to be successful.<br /><br />Some of the common CSFs for OTSS you may want to use are:<br /><br />• Ease of use<br />• Expandability<br />• Maintainability<br />• Cost-effectiveness<br />• Transferability<br />• Reliability<br />• Security<br /><br />Task 2: Test Operational Fit<br /><br />The objective of this task is to determine whether the software will work in your business. Within your business there are several constraints that must be satisfied before you acquire the software, including:<br /><br />• Computer hardware constraints<br />• Data preparation constraints<br />• Data entry constraints<br /><br />Step 1: Compatibility with Your Hardware, Operating System, and Other Software Packages<br /><br />This is not a complex test. 
It involves a simple matching between your processing capabilities and limitations, and what the vendor of the software says is necessary to run the software package. The most difficult part of this evaluation is ensuring that multiple software packages can properly interface.<br /><br />Hardware compatibility. List the following characteristics for your computer hardware:<br />• Hardware vendor<br />• Amount of main storage<br />• Disk storage unit identifier<br />• Disk storage unit capacity<br />• Type of printer<br />• Number of print columns<br />• Type of terminal<br />• Maximum terminal display size<br />• Keyboard restrictions<br /><br />Operating systems compatibility. For the operating system used by your computer hardware, list:<br />1. Name of operating system<br />2. Version of operating system in use<br /><br />Program compatibility. List all of the programs with which you expect or would like this specific application to interact.<br /><br />Data compatibility. In many cases, program compatibility will answer the questions on data compatibility. However, if you have created special files, you may need descriptions of the individual data elements and files.<br /><br />Step 2: Integrating the Software into Your Business System Work Flow<br /><br />Each computer system makes certain assumptions. Unfortunately, these assumptions are rarely stated in the vendor literature.<br />The danger is that you may be required to do some manual processing functions that you may not want to do in order to utilize the software.<br /><br />The objective of this test is to determine whether you can plug the OTSS into your existing manual system without disrupting your entire operation. 
Remember that:<br /><br />• Your manual system is based on a certain set of assumptions.<br />• Your manual system uses existing forms, existing data, and existing procedures.<br />• The computer system is based on a set of assumptions.<br />• The computer system uses a predetermined set of forms and procedures.<br />• Your current manual system and the new computer system may be incompatible.<br />• If they are incompatible, the computer system is not going to change – you will have to.<br />• You may not want to change – then what?<br /><br />Performing the Data Flow Diagram Test<br /><br />The data flow diagram is really more than a test. At the same time that it tests whether you can integrate the computer system into your business system, it shows you how to do it. It is both a system test and a system design methodology incorporated into a single process. So, to prepare the document flow narrative or document flow description, these three tasks must be performed:<br /><br />• Prepare a document flow of your existing system.<br />• Add the computer responsibility to the data flow diagram.<br />• Modify the manual tasks as necessary.<br /><br />The objective of this process is to illustrate the type and frequency of work flow changes that will be occurring. At the end of this test, you will need to decide whether you are pleased with the revised work flow. If you feel the changes can be effectively integrated into your work flow, the potential computer system has passed the test. If you feel the changes in the work flow will be disruptive, you may want to fail the software in this test and either look for other software or continue manual processing.<br /><br />Step 3: Demonstrating the Software in Operation<br /><br />This test analyzes the many facets of software. Software developers are always excited when their program goes to what they call “end of job”. 
This means that it executes and concludes without abnormally terminating.<br />The demonstration can be performed in either of the following ways:<br /><br />• Computer store controlled demonstration<br />• Computer site demonstration<br /><br />These aspects of the computer software should be observed during the demonstration:<br /><br />• Understandability<br />• Clarity of communication<br />• Ease of use of the instruction manual<br />• Functionality of the software<br />• Knowledge needed to execute<br />• Effectiveness of help routines<br />• Program compatibility<br />• Data compatibility<br />• Smell test<br /><br />Task 3: Test People Fit<br /><br />The objective of this task is to determine whether your employees can use the software. This testing consists of ensuring that your employees have, or can be taught, the necessary skills.<br /><br />This test evaluates whether people possess the skills necessary to effectively use computers in their day-to-day work. The evaluation can be of current skills, or of the program that will be put into place to teach individuals the necessary skills. Note that this includes the owner-president of the organization as well as the lowest-level employee in the organization.<br /><br />The results of this test will show one of the following:<br /><br />• The software can be used as is.<br />• Additional training/support is necessary.<br />• The software is not usable with the skill sets of the proposed users.<br /><br />Task 4: Validate the Software Acceptance Test Process<br /><br />The objective of this task is to validate that the off-the-shelf software will in fact meet the structural and functional needs of the user of the software.<br /><br />Step 1: Create Functional Test Conditions<br /><br />It is important to understand the difference between correctness and reliability because it impacts both testing and operation. 
The types of test conditions that are needed to verify the functional accuracy and completeness of computer processing include:<br /><br />• All transaction types, to ensure they are properly processed<br />• Verification of all totals<br />• Assurance that all outputs are produced<br />• Assurance that all processing is complete<br />• Assurance that controls work<br />• Reports that are printed on the proper paper, and in the proper number of copies<br />• Correct field editing<br />• Logic paths in the system that direct the inputs to the appropriate processing routines<br />• Employees that can input data properly<br />• Employees that understand the meaning and makeup of the computer outputs they generate<br /><br />Step 2: Create Structural Test Conditions<br /><br />Structural, or reliability, test conditions are challenging to create and execute. Novices to the computer field should not expect to do extensive structural testing; they should limit their structural testing to conditions closely related to functional testing. However, structural testing becomes easier to perform as computer proficiency increases. 
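Structural (limit) conditions of this kind can often be probed with a few direct boundary checks. The sketch below is illustrative only: `validate_quantity` is a hypothetical routine standing in for a package whose quantity field holds only two digit positions, so orders above 99 cannot be entered.

```python
# Structural limit probe (illustrative). validate_quantity is a
# hypothetical routine modeling a package whose quantity field
# holds only two digit positions.

def validate_quantity(qty: int) -> bool:
    """Accept only order quantities that fit a two-digit field."""
    return 0 < qty <= 99

# Probe the stated limit: the largest value that fits, and one past it.
assert validate_quantity(99)        # maximum the field can hold
assert not validate_quantity(100)   # exceeds the two-position field
assert not validate_quantity(0)     # zero-quantity orders rejected
print("limit checks passed")
```

A probe that fails right at the boundary reveals the architectural limit before it surprises a user in production.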
Structural testing is quite valuable. Some of the architectural problems that could affect computer processing include:<br /><br />• Internal limits on the number of events that can occur in a transaction (e.g., the number of products that can be included on an invoice)<br />• Maximum size of fields (e.g., quantity is only two positions in length, making it impossible to enter an order for over 99 items)<br />• Disk storage limitations (e.g., you are only permitted to have X customers)<br />• Performance limitations (e.g., the time to process transactions jumps significantly when you enter over X transactions)<br /><br />Check Procedures<br /><br />At the conclusion of this testing process, the tester should verify that the OTSS test process has been conducted effectively.<br /><br />Output<br /><br />There are three potential outputs as a result of executing the OTSS test process:<br /><br />• Fully acceptable<br />• Unacceptable<br />• Acceptable with conditions<br /><br />2.7.3 Testing Client/Server Systems<br /><br />Concerns:<br />The concerns about client/server systems reside in the area of control. The testers need to determine that adequate controls are in place to ensure accurate, complete, timely, and secure processing of client software systems. The testers must address these five concerns:<br /><br />• Organizational readiness<br />• Client installation<br />• Security<br />• Client data<br />• Client/server standards<br /><br />Input:<br />The input to this test process will be the client/server system. This includes the server technology and capabilities, the communication network, and the client workstations that will be incorporated into the test.<br /><br />Do Procedures:<br />The testing for client/server software includes the following three tasks.<br /><br />Task 1: Assess Readiness<br />Client/server programs should have sponsors. 
Ideally these are the directors of information technology and the impacted user management. It is the responsibility of the sponsors to ensure that the organization is ready for client/server technology. However, those charged with installing the new technology should provide the sponsor with a readiness assessment. The following are the dimensions of the readiness assessment:<br /><br />• Motivation<br />• Investment<br />• Client/server skills<br />• User education<br />• Culture<br />• Client/server support staff<br />• Client/server aids/tools<br />• Software development process maturity<br /><br />Task 2: Assess Key Components<br />Experience shows that if the key, or driving, components of the technology are in place and working, they will provide most of the assurance necessary for effective processing. Four key components are identified for client/server technology:<br />1. Client installations are done correctly.<br />2. Adequate security is provided for the client/server system.<br />3. Client data is adequately protected.<br />4. Client/server standards are in place and working.<br /><br />Task 3: Test the System<br />The testing of the client/server system should be performed taking into account the four key components listed above.<br /><br />Output:<br />The output from this process is the test report indicating what works and what does not work. The report should also contain recommendations by the test team for improvements where appropriate.<br /><br />2.8 Evaluate Test Effectiveness<br /><br />The major concern testers should have is that their testing process will not improve. Without improvement, testers will continue to make the same errors and perform testing inefficiently time after time.<br /><br />The input for this evaluation should be the results of conducting software tests, accumulated over time. 
The type of information needed as input includes, but is not limited to:<br /><br />• Number of tests conducted<br />• Resources expended in testing<br />• Test tools used<br />• Defects uncovered<br />• Size of software tested<br />• Days to correct defects<br />• Defects not corrected<br />• Defects uncovered during operation that were not uncovered during testing<br />• Developmental phase in which defects were uncovered<br />• Names of defects uncovered<br /><br />Once a decision has been made to formally assess the effectiveness of testing, an assessment process is needed. A seven-task approach to assessing the effectiveness of systems testing is as follows:<br /><br />Task 1: Establish Assessment Objectives<br /><br />The objectives for performing the assessment should be clearly established. If objectives are not defined, the measurement process may not be properly directed and thus may not be effective. These objectives include:<br /><br />• Identify test weaknesses<br />• Identify the need for new test tools<br />• Assess project testing<br />• Identify good test practices<br />• Identify poor test practices<br />• Identify economical test practices<br /><br />Task 2: Identify What to Measure<br /><br />The categories of information needed to accomplish the measurement objectives should be identified. The following are five characteristics of application system testing that can be measured:<br /><br />• Involvement<br />• Extent of testing<br />• Resources<br />• Effectiveness<br />• Assessment<br /><br />Task 3: Assign Measurement Responsibility<br /><br />One group should be made responsible for collecting and assessing testing performance information. Without a specific accountable individual, there will be no catalyst to ensure that the data collection and assessment process occurs. The responsibility for the use of information services resources resides with IT management. 
However, they may desire to delegate the responsibility for assessing the effectiveness of the test process to a function within the department.<br /><br />Task 4: Select Evaluation Approach<br /><br />Several approaches can be used in performing the assessment process. The one that best matches the managerial style should be selected. The following are the most common approaches to evaluating the effectiveness of testing:<br /><br />• Judgment<br />• Compliance with methodology<br />• Problems after test<br />• User reaction<br />• Testing metrics<br /><br />The metrics approach is recommended because, once established, it is easy to use and can be shown to correlate highly with effective and ineffective practices. A major advantage of metrics is that the assessment process can be clearly defined, will be known to the people being assessed, and is specific enough that it is easy to determine which testing variables need to be adjusted to improve the effectiveness, efficiency, and/or economy of the test process.<br /><br />Task 5: Identify Needed Facts<br /><br />The facts necessary to support the selected approach should be identified. The metrics approach clearly identifies the type of data needed for the assessment process. The needed information includes:<br /><br />• Change characteristics<br />• Magnitude of system<br />• Cost of process being tested<br />• Cost of test<br />• Defects uncovered by testing<br />• Defects detected by phase<br />• Defects uncovered after test<br />• Cost of testing by function<br />• System complaints<br />• Quantification of defects<br />• Who conducted the test<br />• Quantification of correctness of defect<br /><br />Task 6: Collect Evaluation Data<br /><br />Once the data has been identified, a system must be established to collect and store the needed data in a form suitable for assessment. 
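As a toy illustration of such a collection mechanism, the records below hold a few of the facts listed above and feed two of the suggested metrics (cost to locate a defect, and defects uncovered in testing versus total defects). All field names and figures are invented for the example.

```python
# Toy collection record for test-evaluation data; fields and sample
# numbers are illustrative, not taken from any real project.
from dataclasses import dataclass

@dataclass
class TestCycleRecord:
    tests_conducted: int
    test_cost: float            # resources expended in testing
    defects_in_test: int        # defects uncovered by testing
    defects_in_production: int  # defects that escaped to operation

def cost_to_locate_defect(rec: TestCycleRecord) -> float:
    """Cost of testing versus the number of defects located in testing."""
    return rec.test_cost / rec.defects_in_test

def defects_uncovered_ratio(rec: TestCycleRecord) -> float:
    """Defects located by testing versus total known system defects."""
    total = rec.defects_in_test + rec.defects_in_production
    return rec.defects_in_test / total

cycle = TestCycleRecord(tests_conducted=120, test_cost=18000.0,
                        defects_in_test=45, defects_in_production=5)
print(cost_to_locate_defect(cycle))    # 400.0 (cost units per defect)
print(defects_uncovered_ratio(cycle))  # 0.9 (90% of known defects)
```

Accumulating one such record per test cycle gives the trend data that the assessment tasks below rely on.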
This may require a collection mechanism, a storage mechanism, and a method to select and summarize the information. Wherever possible, utility programs should be used for this purpose.<br /><br />Task 7: Assess the Effectiveness of Testing<br /><br />The raw information must be analyzed to draw conclusions about the effectiveness of systems testing. From this analysis, action can be taken by the appropriate party.<br /><br />Use of Testing Metrics: Testing metrics are relationships that show a high positive correlation to that which is being measured. Metrics are used in almost all disciplines as the basis for assessing the effectiveness of some process. Some of the common assessments familiar to most people in other disciplines include:<br /><br />• Blood pressure (medicine)<br />• Student aptitude test (education)<br />• Net profit (accounting)<br />• Accidents per worker-day (safety)<br /><br />The following are the suggested metrics for evaluating application system testing.<br /><br />1. User participation (user participation test time divided by total test time).<br />2. Instructions exercised (number of instructions exercised versus total number of instructions).<br />3. Number of tests (number of tests versus size of system tested).<br />4. Paths tested (number of paths tested versus total number of paths).<br />5. Acceptance criteria tested (acceptance criteria verified versus total acceptance criteria).<br />6. Test cost (test cost versus total system cost).<br />7. Cost to locate defect (cost of testing versus the number of defects located in testing).<br />8. Achieving budget (anticipated cost of testing versus the actual cost of testing).<br />9. Detected production errors (number of errors detected in production versus application system size).<br />10. Defects uncovered in testing (defects located by testing versus total system defects).<br />11. 
Effectiveness of test to business (loss due to problems versus total resources processed by the system).<br />12. Asset value of the test (test cost versus assets controlled by system).<br />13. Rerun analysis (rerun hours versus production hours).<br />14. Abnormal termination analysis (installed changes versus number of application system hang-ups).<br />15. Source code analysis (number of source code statements changed versus the number of tests).<br />16. Test efficiency (number of tests required versus the number of system errors).<br />17. Startup failure (number of program changes versus the number of failures the first time the changed program is run in production).<br />18. System complaints (system complaints versus number of transactions processed).<br />19. Test automation (cost of manual test effort versus total test cost).<br />20. Requirements phase testing effectiveness (requirements test cost versus number of errors detected during the requirements phase).<br />21. Design phase testing effectiveness (design test cost versus number of errors detected during the design phase).<br />22. Program phase testing effectiveness (program test cost versus number of errors detected during the program phase).<br />23. Test phase testing effectiveness (test cost versus number of errors detected during the test phase).<br />24. Installation phase testing effectiveness (installation test cost versus number of errors detected during the installation phase).<br />25. Maintenance phase testing effectiveness (maintenance test cost versus number of errors detected during the maintenance phase).<br />26. Defects uncovered in test (defects uncovered versus size of system).<br />27. Untested change problems (number of untested changes versus problems attributable to those changes).<br />28. Tested change problems (number of tested changes versus problems attributable to those changes).<br />29. Loss value of test (loss due to problems versus total resources processed by the system).<br />30. 
Scale of ten (assessment of testing rated on a scale of ten).<br /><br />2.9 Building Test Documentation<br /><br />The test documentation should be an integral part of the documentation of application systems. Information services documentation standards should specify the type and extent of test documentation to be prepared and maintained.<br /><br />The uses for that documentation include:<br /><br />• Verify correctness of requirements.<br />• Improve user understanding of information services.<br />• Improve user understanding of application systems.<br />• Justify test resources.<br />• Determine test risk.<br />• Create test transactions.<br />• Evaluate test results.<br />• Reset the system.<br />• Analyze the effectiveness of the test.<br /><br />Types:<br />The two general categories of test documentation are:<br /><br />Test plan: The plan for the testing of the application system, including detailed specifications, descriptions, and procedures for all tests, and test data reduction and evaluation criteria.<br /><br />Test analysis documentation: Documentation that covers the test analysis results and findings; presents the demonstrated capabilities and deficiencies for review; and provides a basis for preparing a statement of the application system's readiness for implementation.<br /><br />Test Plan Documentation:<br />The test plan outlines the process to be followed in testing the application system. It includes the plan, the specifications for the tests and how those tests will be evaluated, plus the descriptions of the tests themselves. 
<br /><br />The documentation is divided into different sections.<br /><br />Section 1: General Information<br />• Summary<br />• Environment and pretest background<br />• References: References that are helpful in preparing for or conducting the test should be listed, such as:<br />o Project request (authorization)<br />o Previously published documents on the project (project deliverables)<br />o Documentation concerning related projects<br />o Testing policies, standards, and procedures<br />o Books and articles describing test processes, techniques, and tools<br /><br />Section 2: Plan<br /><br />• Software description<br />• Milestones<br />• Testing (identify location)<br />o Schedule<br />o Requirements<br />- Equipment<br />- Software<br />- Personnel<br />o Testing materials<br />- Documentation<br />- Software to be tested and its medium<br />- Test inputs and sample outputs<br />- Test control software and work papers<br />o Test training<br />• Testing (identify location)<br /><br />Section 3: Specifications and Evaluations<br /><br />• Specifications<br />o Requirements<br />o Software functions<br />o Test/function relationships<br />o Test progression<br />• Methods and constraints<br />o Methodology<br />o Conditions<br />o Extent<br />o Data recording<br />o Constraints<br />• Evaluation<br />o Criteria<br />o Data reduction<br /><br />Section 4: Test Descriptions<br /><br />• Test (identify)<br />o Control<br />o Inputs<br />o Outputs<br />o Procedures<br /><br />Test Analysis Report Documentation<br /><br />The test analysis report documents the results of the test. 
It serves the dual purpose of recording the results for analysis and providing a means to report those analyses to the involved parties.<br /><br />The documentation is divided into different sections.<br /><br />Section 1: General Information<br /><br />• Summary<br />• Environment<br />• References<br />o Project request<br />o Previously published documents<br />o Documentation concerning related projects<br /><br />Section 2: Test Results and Findings<br /><br />• Test (identify)<br />o Dynamic data performance<br />o Static data performance<br /><br />Section 3: Software Function Findings<br /><br />• Function (identify)<br />o Performance<br />o Limits<br /><br />Section 4: Analysis Summary<br /><br />• Capabilities<br />• Deficiencies<br />• Recommendations and estimates<br />• Opinion<br /><br />3.0 Unit Summary<br />In this unit we have learned:<br />1. The process of test execution.<br />2. Different approaches to test execution.<br />3. Test documentation.<br /><br />3.1 Exercise<br />Answer the following briefly:<br />1. What process would you follow for installation testing?<br />2. What is the process for testing web applications, and what are the areas of concern?<br />3. What is the process for testing off-the-shelf software?<br />4. What is test effectiveness?<br />5. 
List the uses of test documentation.<br /><br /><span style="font-weight:bold;">Testing Types</span><br />TABLE OF CONTENTS<br />1.0 General<br /> Module Objectives<br /> Module Structure<br />2.0 Testing Types<br /> Black Box Testing<br /> White Box Testing<br />3.0 Structural Vs Functional<br /> Overview of Structural Testing<br /> Overview of Functional Testing<br /> Structural Vs Functional<br />4.0 Regression Testing<br /> What is regression testing<br /> Types of regression testing<br /> How to select test cases for regression testing<br /> Resetting the test cases for regression testing<br /> How to conclude the results of a regression testing<br /> Can we apply the regression test guidelines for patch/upgrade releases<br /> How do I find out which test cases are to be executed for a particular defect fix<br />5.0 Performance Testing<br /> What is Performance Testing?<br /> Factors Influencing Performance Test<br /> Performance Testing Approach<br /> Flow Diagram for Performance Engineering<br /> Performance Testing Approach Summary<br /> Exemplar Scenario for Performance Testing<br /> Tools for Performance Testing<br />6.0 Installation Testing<br /> What Is Installation Testing<br /> Flow Chart for Installation Testing<br /> Who & How to do Installation Testing<br /> What to Check in Installation Testing<br />7.0 Compatibility Testing<br /> Process for Compatibility<br /> Identification of Parameters<br /> Combination of Parameters to Test<br /> Test Case Identification<br /> Setup Management<br /> Ensuring Pre-requisites<br /> Execution of Test<br /> What is Ghosting? 
8.0 Acceptance<br /> Acceptance Criteria for Requirements Phase<br /> Acceptance Criteria for Planning Phase<br /> Acceptance Criteria for Design Phase<br /> Acceptance Criteria for Execution Phase<br />9.0 Manual vs. Automation<br /> Manual Testing<br /> Automation Testing<br /> Flow diagram to determine Test scenarios for Manual and Automation<br /> Summary of Manual Vs Automated Testing<br /> Why & When One Should Consider Automation<br /> A Sensible Approach to Automation<br />10.0 Unit Summary<br /> Exercise<br /><br />1.0 General:<br /><br />Testing as such is a very wide domain, with its own complex and lengthy processes.<br />When a project is taken up for testing, deciding the kind of testing to be performed is a major task for the test engineers.<br />Typically, the kind of testing to be performed is decided either by management or by the client. In most real-world scenarios, the client specifies the kind of testing to be done for the given project.<br />Depending on the testing requirements, the project managers need to decide the testing strategy for the product and prepare the schedule accordingly.<br />The test engineers need to be prepared to take up any kind of testing on the product, and hence need to know the processes to be followed to perform the given type of testing.<br />Each type of testing requires a different approach, and the corresponding testing techniques need to be followed for the success of the project.<br />The following module explains in detail the various kinds of testing and the techniques and processes for the same.<br />Participants are encouraged to go through the module carefully and understand the concepts in detail through practice and discussion.<br />1.1 Module Objectives:<br />At the end of this module, you should:<br /><br />• Be able to define various kinds of testing<br />• Understand the various processes to be followed for different types of 
testing<br />• Understand the difference between manual and automated testing approaches.<br /><br />1.2 Module Structure:<br /><br />SNO Topic Duration<br />1 Introduction to testing types, Black Box Testing, White Box Testing 5 hours<br />2 Structural Vs Functional Testing 5 hours<br />3 Regression Testing 5 hours<br />4 Installation Testing 5 hours<br />5 Performance Testing 5 hours<br />6 Compatibility Testing 5 hours<br />7 Acceptance Testing 5 hours<br />8 Manual Vs Automated Testing 5 hours<br /> Total Duration 40 hrs<br />2.0 Testing Types<br />There are several types of software testing that are widely used in today's IT world. When a tester knows what type of testing is needed, it greatly improves the test results, thereby reducing the number of defects that go undetected.<br />Testing can be of different kinds, such as Manual Testing, Automation Testing, Memory Testing, Performance Testing, Usability Testing, etc. These different kinds of testing are broadly classified into two major categories called “Black Box Testing” and “White Box Testing”. We shall look into some of the commonly used methodologies of White Box testing and Black Box testing here.<br />2.1 Black Box Testing<br />Black Box testing is the testing of all the features and functionalities of the final product. It is also called Closed Box Testing. The tester has no information about the internal architecture or functional mechanism of the product. Black Box testing covers a wide range and has many sub-kinds.<br />2.1.1 Acceptance Testing:<br />When the product is delivered for testing, it is first checked whether the product is in a functional condition to test. This is called Acceptance Testing. The testers can generate an acceptance test case, which can be run for Acceptance Testing.<br />2.1.2 Ad Hoc Testing<br />This method is also called Exploratory Testing, wherein no test case designs are used to test the product. 
The tester goes by his intuition and creativity in finding out problems with the product.<br />2.1.3 Automation Testing:<br />This method of testing is done using third-party tools. In this method, we automate, or run, a particular test case a specific number of times. This helps in verifying the consistent passing of the test case and also helps in finding out problems related to memory, performance, etc. More about this method is dealt with in detail later.<br />2.1.4 Boundary Value Testing<br />Certain features will require inputs to be taken from the user, like a login name on a site. There will also be a lower-end limit and a higher-end limit on the input value to be given. In this kind of testing, we check the feature using values just below and above the lower limit and the upper limit.<br />2.1.5 Compatibility Testing<br />Compatibility Testing confirms that the product installed does not hinder the functionality of any other product installed on the machine. Products are said to be compatible with each other if they can share data between them or if they can simultaneously reside in the same computer's memory. In short, compatibility testing checks that one product works with another.<br />2.1.6 Integration Testing<br />This testing is done when the different modules of the product have been integrated together. It is an orderly process, which is carried out until all the modules have been integrated into the system.<br /><br />2.1.7 Manual Testing<br />That part of software testing that requires operator input, analysis, or evaluation.<br />2.1.8 Performance Testing<br />These tests might determine which modules execute most often or use the most computer time. 
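A crude sketch of such a measurement, using Python's timeit module on two hypothetical lookup routines (both routines and the workload are invented for illustration):

```python
# Crude performance probe: time two hypothetical routines to see
# which one uses the most computer time.
import timeit

def lookup_linear(items, target):
    """Hypothetical slow module: linear scan of a list."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def lookup_dict(index, target):
    """Hypothetical fast module: dictionary lookup."""
    return index.get(target, -1)

items = list(range(10_000))
index = {value: pos for pos, value in enumerate(items)}

slow = timeit.timeit(lambda: lookup_linear(items, 9_999), number=200)
fast = timeit.timeit(lambda: lookup_dict(index, 9_999), number=200)
print(f"linear scan: {slow:.4f}s  dict lookup: {fast:.4f}s")
```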
Then those modules are re-examined and recoded to run more quickly.<br />2.1.9 Regression Testing<br />This testing is done when new features are added or existing features are modified, to confirm that the features that have been functioning still function and that defects which have been fixed remain fixed.<br /><br />2.1.10 Usability Testing<br />Usability testing is done to check the flow of the product and its user-friendliness. It helps in finding out whether the user is able to interact with the product and achieve his goal. People who are new to the product usually perform this kind of test, and the testers and developers study the user while he is carrying out the process.<br /><br />2.1.11 Installation Testing<br />Here the installation media and the different kinds of installation processes on different operating systems with different configurations are tested. What good is a product if it does not install properly on different computer configurations?<br />2.2 White Box Testing:<br />White box testing involves testing at the code level and is usually done at the coding stage. Here the inputs required for testing are fed to the program and the output is checked. The tester knows, or can examine, how the program works internally. There are a lot of different kinds of white box testing; some of the major methods are mentioned below.<br />2.2.1 Boundary Value Testing<br />This method can be used both as black box and as white box testing. The feature tested here is similar to that in black box testing, as mentioned earlier. (See the Black Box testing section.)<br /><br />2.2.2 Branch Testing<br />This testing is done to satisfy the coverage criterion that, for each decision point, each possible branch is executed at least once. 
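A minimal sketch of the idea, using an invented `apply_discount` function containing a single decision point; one test input per branch satisfies the criterion:

```python
# Branch-testing sketch. apply_discount is hypothetical: it has one
# decision point, so two test inputs exercise both branches.

def apply_discount(total: float) -> float:
    """Give 10% off on orders of 100 or more."""
    if total >= 100:        # decision point: true branch
        return total * 0.9
    return total            # false branch

# One test per branch satisfies the branch-coverage criterion.
assert apply_discount(200.0) == 180.0  # exercises the true branch
assert apply_discount(50.0) == 50.0    # exercises the false branch
print("both branches executed at least once")
```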
For example, if the execution reaches a Case statement, this kind of testing should cover each of the Case conditions at least once.<br /><br />2.2.3 Top-Down and Bottom-Up Testing<br />In top-down testing, the highest-level modules are tested first, whereas in bottom-up testing, the lower-level modules are tested first and then the higher-level modules.<br /><br />2.2.4 Hybrid Testing<br />A combination of top-down testing with bottom-up testing of prioritized or available components.<br />2.2.5 Incremental Testing<br />Under this approach, each piece of a module is first tested separately. This makes it easy to pin down the cause of an error, but it requires special scaffolding code. Each piece is individually tested, with focus on it, and is thoroughly tested.<br />2.2.6 Special Case Testing<br />This kind of testing can be conducted as both black box testing and white box testing. Here testing is done by giving input values that seem likely to cause program errors, like "0", "1", NULL, or an empty string.<br />2.2.7 Statement Testing<br />Testing that ensures each statement in a program is executed at least once during program testing. It is said, “Untested code is a hidden bomb”. 
It is true in the sense that if code is not tested, we can never say whether it will function properly or throw up errors.<br /><br />Some of the other testing methods and terminologies used in the testing field are:<br />Alpha Testing, Beta Testing, Assertion Testing, Big Bang Testing, Design-Based Testing, Development Testing, Error-Based Testing, Equivalence Testing, etc.<br /><br />3.0 Structural Vs Functional<br />The testing goal is to evaluate the system as a whole, not its parts.<br />• Techniques can be structural or functional<br />• Techniques can be used in any stage that tests the system as a whole (acceptance, installation, etc.)<br />• Techniques are not mutually exclusive<br />3.1 Overview of Structural Testing<br />Structural techniques:<br /><br />• Stress testing - Test larger-than-normal capacity in terms of transactions, data, users, speed, etc.<br />• Execution testing - Test performance in terms of speed, precision, etc.<br />• Recovery testing - Test how the system recovers from a disaster, how it handles corrupted data, etc.<br />• Operations testing - Test how the system fits in with existing operations and procedures in the user organization<br />• Compliance testing - Test adherence to standards<br />• Security testing - Test security requirements<br />3.2 Overview of Functional Testing<br />Functional techniques:<br />• Requirements testing - The fundamental form of testing - makes sure the system does what it is required to do<br />• Regression testing - Make sure unchanged functionality remains unchanged<br />• Error-handling testing - Test required error-handling functions (usually for user error)<br />• Manual-support testing - Test that the system can be used properly - includes user documentation<br />• Intersystem handling testing - Test that the system is compatible with other systems in the environment<br />• Control testing - Test required control mechanisms<br />• Parallel testing - Feed the same input into two versions of the system to make sure they 
produce the same output<br /><br />3.3 Structural Vs Functional<br /><br />Structural testing techniques:<br />• “White box” testing<br />• Based on statements in the code<br />• Coverage criteria related to physical parts of the system<br />• Tests how a program/system does something<br /><br />Functional testing techniques:<br />• “Black box” testing<br />• Based on input and output<br />• Coverage criteria based on behavioral aspects<br />• Tests the behavior of a system or program<br /><br />4.0 Regression Testing<br />4.1 What is regression testing<br />Regression testing is selective retesting of the system with the objective of ensuring that the bug fixes work and that those bug fixes have not caused any unintended effects in the system.<br />4.2 Types of regression testing<br />Two types of regression testing are proposed here, even though this distinction is not widely practiced or popular.<br />A "final regression test" is done to validate the gold master builds, and "regression testing" is done to validate the product and failed test cases between system test cycles.<br />The final regression test cycle is conducted on an "unchanged build for a period of x days", or for a period that was agreed on as the "cook-time" for the release. The product is continuously exercised for the complete duration of this cook-time. Some of the test cases are even repeated to find out whether there are failures in the final product that will reach the customer. All the bug fixes for the release should have been completed for the build used for the final regression test cycle. The final regression test cycle is more critical than any other type or phase of testing, as this is the only testing which ensures that "the same build of the product that was tested reaches the customer".<br />A normal regression test can use builds for a period that is exactly as long as needed for the test cases to be executed. 
However, an unchanged build is highly recommended for each cycle of regression testing.<br /><br />4.3 How to select test cases for regression testing<br /><br />Some of the defects reported by customers in the past were found to be due to last-minute bug fixes creating side effects; hence, selecting test cases for regression testing is really an art, and not an easy one.<br />The selection of test cases for regression testing:<br />a. Requires knowledge of the bug fixes and how they affect the system<br />b. Includes the areas of frequent defects<br />c. Includes the areas which have undergone many/recent code changes<br />d. Includes the areas which are highly visible to the users<br />e. Includes the core features of the product which are mandatory requirements of the customer<br /><br />Selection of test cases for regression testing depends more on the criticality of the bug fixes than on the criticality of the defect itself. A minor defect can result in a major side effect, and a bug fix for an extreme defect can have no side effect or only a minor one. So the test engineer needs to balance these aspects when selecting test cases for regression testing.<br /><br />When selecting the test cases, we should not choose too many test cases which are bound to fail and have little or no relevance to the bug fixes. Select more positive test cases than negative test cases for the final regression test cycle, as negative cases may create confusion and unexpected heat. It is also recommended that the regular test cycles before regression testing have the right mix of both positive and negative test cases. By negative test cases, I mean those test cases which are newly introduced with the intent of breaking the system.<br /><br />It is noticeable that several companies have a "constant test case set" for regression testing, executed irrespective of the number and type of bug fixes. 
Sometimes this approach may not find all side effects in the system, and sometimes the effort spent executing test cases for regression testing could be reduced if some analysis were done to find out which test cases are relevant and which are not.<br />It is a good approach to plan and act for regression testing from the beginning of the project, before the test cycles. One idea is to classify the test cases into various priorities based on importance and customer usage. Here I suggest the test cases be classified into three categories:<br /><br />• Priority-0 – Sanity test cases which check basic functionality; run for pre-system acceptance and when the product goes through a major change. These test cases deliver very high project value to both the engineering department and customers.<br /><br />• Priority-1 – Use the basic and normal setup; these test cases deliver high project value to both engineering and customers.<br /><br />• Priority-2 – These test cases deliver moderate project value. They are executed as part of the system test cycle and selected for regression testing on a need basis.<br /><br />There are several right approaches to regression testing, which need to be decided on a case-by-case basis:<br /><br />• Case 1: If the criticality and impact of the bug fixes are LOW, it is enough for a test engineer to select a few test cases from the test case database (TCDB) and execute them. These test cases can fall under any priority (0, 1, or 2).<br /><br />• Case 2: If the criticality and impact of the bug fixes are MEDIUM, then we need to execute all Priority-0 and Priority-1 test cases. If the bug fixes need additional test cases from Priority-2, those can also be selected and used for regression testing. 
Selecting Priority-2 test cases in this case is desirable but not a must.<br /><br />• Case 3: If the criticality and impact of the bug fixes are HIGH, then we need to execute all Priority-0, Priority-1, and carefully selected Priority-2 test cases.<br /><br />• Case 4: One can also go through the complete log of changes caused by bug fixes (obtainable from the CM engineer) and select the test cases for regression testing from it. This is an elaborate process but can give very good results.<br /><br /><br /><br /><br /><br /><br /><br /><br />4.4 Resetting the test cases for regression testing<br />In a big product release involving several rounds of testing, it is very important to note which test cases were executed with which builds, and related information. This is called test case result history. In many organizations, not all types of testing and not all test cases are repeated for each cycle. In such cases, resetting the test cases becomes very critical to the success of regression testing. Resetting a test case is nothing but setting a flag called NOTRUN or EXECUTE AGAIN, with zero-base thinking.<br /><br />A RESET of test cases is not expected to be done often. Resetting needs to be done with the following considerations:<br />a. When there is a major change in the product<br />b. When there is a change in the build procedure which affects the product<br />c. In a large release cycle where some test cases were not executed for a long time<br />d. When you are in the final regression test cycle with a few selected test cases<br />e. When there is a situation where the expected results of the test cases could be quite different from previous cycles<br /><br />When the above guidelines are not met, you may want to RERUN the test cases rather than resetting their results. 
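The RESET/RERUN decision above can be sketched as a small helper. Everything here is illustrative: the trigger names mirror considerations (a) through (e), and none of this is a real TCDB schema or API.

```python
# Hypothetical sketch of the RESET vs. RERUN decision for an
# already-executed test case; trigger names are made up for illustration.
RESET_TRIGGERS = {
    "major_product_change",
    "build_procedure_change",
    "long_unexecuted_cycle",
    "final_regression_cycle",
    "expected_results_may_differ",
}

def next_state(triggers: set) -> str:
    """Decide what to do with a test case that already has a result."""
    if triggers & RESET_TRIGGERS:
        return "NOTRUN"   # RESET: zero-base thinking, counts against completion rate
    return "RERUN"        # re-execute, but the prior result is expected to hold

print(next_state({"major_product_change"}))  # NOTRUN
print(next_state(set()))                     # RERUN
```

Either state leads to re-execution; the difference, as described above, is only in what result you expect and how completion is reported.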
There are only a few differences between the RERUN and RESET states of a test case. Either way the test case is executed, but in the case of RESET one has to think zero-base and expect a different result than what was obtained in earlier cycles, and therefore those test cases affect the completion rate of testing. In the case of RERUN, management need not worry about the completion rate, as those test cases can be considered complete except for a formality check and are expected to give the same results.<br /><br />RESET is also decided based on how stable the functionalities are. If you are in Priority-1 and have reached a comfort level on Priority-0 (say, for example, more than a 95% pass rate), then you don't RESET Priority-0 test cases unless there is a major change. The same is true for Priority-1 test cases when you are in the Priority-2 test phase.<br /><br />Pre-system test cycle phase<br />For pre-system acceptance, only Priority-0 test cases are used. For each build entering system test, the build number is selected and all test cases in Priority-0 are reset to NOTRUN. The system test cycle starts only if all pre-system test cases (Priority-0) pass. The test manager or CCB can decide on exceptions, if any.<br /><br />System test cycle – Priority-1 testing phase<br />After pre-system acceptance is over, Priority-1 test cases are executed. Priority-1 testing can use multiple builds. In this phase the test cases are RESET only if the criticality and impact of the bug fixes and feature additions are high. A RESET procedure during this phase may affect all Priority-0 and Priority-1 test cases, and these test cases are reset to NOTRUN in the TCDB.<br /><br />System test cycle – Priority-2 testing phase<br />Priority-2 testing starts after all test cases in Priority-1 are executed with an acceptable pass percentage, as defined in the test plan. In this phase several builds are used. 
In this phase the test cases are RESET only if the criticality and impact of the bug fixes and feature additions are very high. A RESET procedure during this phase may affect Priority-0, Priority-1, and Priority-2 test cases.<br /><br />How is regression testing related to the above three phases?<br />Regression testing is normally done after Priority-2 testing, or for the next release involving only a few changes. Resetting test cases during the above phases is not called regression testing, as in my view regression comes into the picture only after the product is stable. Testing for a release can be decided either by saying that regression testing is sufficient, or by doing all phases of testing from Priority-0 through Priority-2.<br />Regression testing for a release can use test cases from all priorities (as mentioned before). Regression testing involving multiple priorities of test cases also requires the test cases to be executed in strict order: Priority-0 test cases first, Priority-1 next, and Priority-2 last.<br /><br />Why do we need to RESET the test cases?<br />Regression testing uses a good number of test cases which have already been executed and are associated with results, and with assumptions about those results. A RESET procedure sets them to NOTRUN so that it gives a clear picture of how much testing is still remaining, and reflects the results of the regression testing on a zero base.<br /><br />If test cases are not RESET, then the test engineers tend to report a completion rate and other results based on previous builds. This is because of the basic assumption that multiple builds can be used in each phase of testing, and a gut feeling that if something passed in past builds, it will pass in future builds too. Regression testing doesn't go with the assumption that "the future is an extension of the past". 
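The effect of a RESET on the reported completion rate can be illustrated with a small calculation; the test-case names and statuses below are made up for the example.

```python
# Completion rate = fraction of test cases in any state other than NOTRUN.
def completion_rate(results: dict) -> float:
    executed = sum(1 for status in results.values() if status != "NOTRUN")
    return executed / len(results)

results = {"TC1": "PASS", "TC2": "PASS", "TC3": "FAIL", "TC4": "PASS"}
print(f"{completion_rate(results):.0%}")   # 100%, but based on previous builds

# A RESET returns every selected case to NOTRUN, so the reported
# completion rate reflects the regression cycle on a zero base.
for tc in results:
    results[tc] = "NOTRUN"
print(f"{completion_rate(results):.0%}")   # 0%
```

This is why RESET test cases "affect the completion rate": progress is reported from zero rather than carried over from earlier builds.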
<br />4.5 How to conclude the results of a regression test<br /><br />Regression testing uses only one build for testing (if not, it is strongly recommended). It is expected that 100% of the test cases pass using the same build. In situations where the pass percentage is not 100, the test manager can look at the previous results of each test case to conclude the expected result:<br />a. If the result of a particular test case was PASS using the previous builds and FAIL in the current build, then regression failed. We need to get a new build and start the testing from scratch after resetting the test cases.<br />b. If the result of a particular test case was FAIL using the previous builds and PASS in the current build, then it is easy to assume that the bug fixes worked.<br />c. If the result of a particular test case was FAIL using the previous builds and FAIL in the current build, and there are no bug fixes for this particular test case, it may mean that the result of this test case shouldn't be counted in the pass percentage. It may also mean that such test cases shouldn't be selected for regression.<br />d. If the result of a particular test case is FAIL using the previous builds but it works with a documented workaround, then:<br />a. if you are satisfied with the workaround, it should be considered PASS for both the system test cycle and the regression test cycle;<br />b. if you are not satisfied with the workaround, it should be considered FAIL for the system test cycle but can be considered PASS for the regression test cycle.<br /><br /><br /><br /><br /><br />4.6 Can we apply the regression test guidelines to patch/upgrade releases?<br /><br />The regression guidelines are applicable in both cases, where:<br />a. You are doing a major release of a product, have executed all system test cycles, and are planning a regression test cycle for bug fixes.<br />b. 
You are doing a minor release of a product (CSPs, patches …etc.) containing only bug fixes, and you are planning regression test cycles to take care of those bug fixes.<br /><br />Multiple cycles of regression testing can be planned for each release, if bug fixes come in phases or to take care of some bug fixes not working with a specific build.<br />4.7 How do I find out which test cases to execute for a particular defect fix<br /><br />When failing a test case, it is good practice to enter the defect number(s) along with it, so that you will know which test cases to execute when a bug fix arrives. Please note that multiple defects can come out of a particular test case, and a particular defect can affect more than one test case.<br /><br />Even though it is easy to map between test cases and defects using these mechanisms, identifying the test cases to be executed to cover the side effects of bug fixes may remain a manual process, as this requires knowledge.<br /><br />5.0 Performance Testing<br />5.1 What is Performance Testing?<br /><br />Performance testing is a critical component of the entire testing process. It determines the application's actual operational boundaries and simulates real-world use of the application.<br /><br />Performance testing can be divided into:<br />- Load Testing<br />- Stress Testing<br />- Scalability Testing<br /><br />5.1.1 Load Testing<br />Load testing is subjecting a system to a (usually) statistically representative load; the main reason for applying load is to support software reliability testing and performance testing.<br /><br />Load testing determines the system's behavior under various workloads.<br />The objective is to see how the system's components react as the workload is gradually increased. 
<br />The usual outcome is a determination of system performance:<br />-- Throughput<br />-- Response Time<br />-- CPU Load<br />-- Memory Usage<br /><br />5.1.2 Stress Testing<br />Stress tests are designed to confront programs with abnormal situations. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.<br /><br /> Testing in which a system is subjected to unrealistically harsh inputs or load with inadequate resources, with the intention of breaking it<br /> Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. See also: boundary value.<br /> Stress testing is subjecting a system to an unreasonable load while denying it the resources needed to process that load (e.g., RAM, disk, interrupts, etc.)<br /><br /><br /><br />5.1.3 Scalability Testing<br />Scalability testing evaluates the effect of adding hardware and/or software to distribute "work" among system components.<br />Tests can be performed in a variety of configurations, with variables such as network speed, number and type of servers/CPUs, memory, etc.<br />By increasing the application's workload, you can determine its overall flexibility and ability to scale for workload growth.<br />5.2 Factors Influencing a Performance Test<br /><br />(a) Speed - Does the application respond quickly enough for the intended users?<br />Speed - User Expectations<br />- Experience<br />- Psychology<br />- Usage<br />Speed - System Constraints<br />- Hardware<br />- Network<br />- Software<br />Speed - Costs<br /> Speed can be expensive<br /><br />(b) Scalability – Will the application handle the expected user load and beyond? 
<br />How many users…<br />- Before it gets "slow"?<br />- Before it stops working?<br />- Will it sustain the load?<br />How much data can it hold?<br />- Database capacity<br />- File server capacity<br />- Back-up server capacity<br />- Data growth rates<br /><br />(c) Stability – Is the application stable under expected and unexpected user loads?<br />What happens if…<br />- there are more users than we expect?<br />- all the users do the same thing?<br />- a user gets disconnected?<br />- there is a denial-of-service attack?<br />- the web server goes down?<br />- we get too many orders for the same thing?<br /><br />(d) Confidence – Are you sure that users will have a positive experience on go-live day?<br />If you know what the performance is…<br />- You can assess risk.<br />- You can make informed decisions.<br />- You can plan for the future.<br />5.3 Performance Testing Approach<br />Evaluate System<br />- Determine performance requirements.<br />- Identify expected and unexpected user activity.<br />- Determine test and/or production architecture.<br />- Identify non-user-initiated (batch) processes.<br />- Identify potential user environments.<br />- Define expected behavior during unexpected circumstances.<br />Develop Test Assets<br />- Create the strategy document.<br />- Develop a risk mitigation plan.<br />- Develop test data.<br />- Automated test scripts:<br /> - Plan<br /> - Create<br /> - Validate<br />Baseline and Benchmarks<br />- Most important for iterative testing. 
- Baseline (single user) tests provide the initial basis for comparison.<br />- Benchmark (15-25% of expected user load) tests determine the actual state at loads expected to meet requirements.<br />Analyze Results<br />The most important and most difficult step; it focuses on:<br />- Have the performance criteria been met?<br />- What are the bottlenecks?<br />- Who is responsible for fixing those bottlenecks?<br />- Decisions.<br /><br />Tune System<br />- Engineering only; highly collaborative with the development team and highly iterative.<br />- Usually the performance engineer 'supports' and 'validates' while developers/administrators tune.<br /><br />Identify Exploratory Tests<br />- Engineering only; exploits a known bottleneck.<br />- Assists with analysis and tuning.<br />- Significant collaboration with the 'tuners'.<br />- Not robust tests – quick and dirty, not often reusable or relevant after tuning is complete.<br /><br />Validate Requirements<br />- Only after Baseline and/or Benchmark tests.<br />- These tests evaluate compliance with documented requirements.<br />- Often conducted on multiple hardware/configuration variations.<br /><br />Complete Engagement<br />Document:<br /> - Actual Results<br /> - Tuning Summary<br /> - Known bottlenecks not tuned<br /> - Other supporting information<br /> - Recommendation<br />Package Test Assets:<br /> - Scripts<br /> - Documents<br /> - Test Data<br /><br /><br />5.4 Flow Diagram for Performance Engineering<br /><br /><br />5.5 Performance Testing Approach Summary<br />- Ensures goals are accomplished.<br />- Defines tasks.<br />- Identifies critical decision points.<br />- Shortens the testing lifecycle.<br />- Increases confidence in results.<br /><br />5.7 Exemplar Scenario for Performance Testing<br />5.7.1 Pre-Project<br />1. How many users (human and system) need to be able to use the system concurrently?<br />a. What is the total user base?<br />b. What is the projected acceptance rate?<br />c. How are the users distributed across the day/week/month? 
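These sizing questions feed a simple concurrent-user estimate. Below is a sketch of the arithmetic used in the scenario that follows; the session length, daily window, and monthly user count are that scenario's assumptions, not fixed rules.

```python
import math

# Daily-average concurrent user load, per the scenario's formula:
# monthly users / (days per month * busy hours per day * sessions per hour).
def concurrent_users(monthly_users: int,
                     days: int = 30,
                     hours_per_day: int = 15,
                     sessions_per_hour: int = 4) -> int:
    return math.ceil(monthly_users / (days * hours_per_day * sessions_per_hour))

avg = concurrent_users(1_000_000)   # 15-min sessions -> 4 sessions per hour
print(avg)                          # 556 daily-average concurrent users
print(avg * 2)                      # ~200% of the average is the usual test target
```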
<br />Scenario<br />Assume evenly distributed billing cycles throughout the month, users spending about 15 minutes each viewing and paying their bill, and the site generally accessed between 9 AM EST and 6 PM PST (15 hours). Concurrent users are calculated with the following formula:<br /><br />(Total monthly users) / (30 days a month * 15 hours a day * 4 sessions per hour {Note: 60 min / 15 min per user}) = daily average concurrent user load.<br />Normally, test to 200% of the daily average concurrent user load.<br /><br />If there are 1 million monthly users:<br /><br />1,000,000 / (30 * 15 * 4) = 556 concurrent users (2,222 hourly users).<br />Recommendation: test up to 1,000 concurrent users (4,000 hourly users).<br /><br />2. General performance objectives<br />a. Can the system service up to a peak of <????> customers an hour without losing customers due to performance issues?<br />b. Is the system stable and functional under extreme stress conditions?<br />3. What are the project boundaries? For example:<br />a. Are all bottlenecks resolved? Or…<br />b. Best possible performance in <????> months? Or…<br />c. Continue tuning until goals are met? Or…<br />d. Continue tuning until bottlenecks are deemed "unfixable" for the current release?<br /><br />5.7.2 Pre-Testing<br />1. What are the specific/detailed performance requirements?<br />a. User Experience (preferred) - e.g., with 500 concurrent users, a user accessing the site over a LAN connection will experience no more than 5 seconds to display a small or medium bill detail report 95% of the time.<br />b. Component Metric (not recommended) - e.g., the database server will keep memory usage under 75% during all tested user loads.<br />2. Detailed description of the test and production environments.<br />a. Associated risks if the environments are not identical<br />b. How best to map performance results if they are not identical<br /><br />3. What is the availability of system administrators/developers/architects?<br /><br />4. What is the test schedule? 
<br />a. Can tests be executed during business hours?<br />b. Must tests be executed during off hours?<br />c. Must tests be executed during weekends?<br /><br />5. Are system monitoring tools already installed/being used on the systems under test?<br />a. Who will monitor/evaluate the monitoring tools?<br /><br />6. Are any other applications/services running on the systems to be tested?<br />a. Think about the associated risks of shared environments, such as memory, disk I/O, drive space, etc.<br /><br />7. What types of tests/users/paths are desired?<br />a. Target 80% of user activity<br />b. Always model system-intensive activity<br /><br />8. Based on the answers, create the Test Strategy Document<br /><br /><br /><br /><br />5.7.3 Test Design/Execution<br /><br />1. Design tests to validate requirements<br />a. Create user experience tests<br />b. Generate loads and collect component metric data while under load<br />2. Always benchmark the application in a known environment first<br />a. Architecture need not match<br />b. Benchmark tests do not need to follow the user community model exactly<br />c. Benchmark tests should represent about 15% of the expected peak load<br />d. Look for problems – "low hanging fruit"<br />e. Do not focus on maximizing user experience; just look for show-stopping bottlenecks<br />3. Benchmark in the production environment if possible<br />a. Look for problems with code/implementation<br />b. Look for out-of-scope problems that the client must fix to meet performance goals (i.e., network, architecture, security, firewalls)<br />c. This benchmark must be identical to the benchmark conducted in the known environment (taking into account differences in architecture)<br />d. Do not focus on maximizing user experience; just look for show-stopping bottlenecks<br />4. Load Test<br />a. Iteratively test/tune/increase load<br />b. 
Start with about the same load as the benchmark, but accurately depicting the user community<br />c. Do not tune until the critical (primary) bottleneck cause is found; do not tune symptoms, and do not tune components that merely could be faster.<br />d. Re-test after each change to validate that it helped; if it didn't, change it back and try the next most likely cause of the bottleneck. Make a note of the change in case it becomes the primary bottleneck later.<br />e. If bottlenecks are environment-related (i.e., out of scope), document the bottleneck and present it to the PM for guidance. Stop testing until the client/PM agree on an approach.<br />f. *Note* If you need performance improved by 200% to reach the goal, tuning a method to execute in 0.2 seconds instead of 0.25 seconds will not fix the problem.<br /><br />5.7.4 Results Reporting<br />1. Report test results related to stated requirements only.<br />a. Show summarized data validating the requirements<br />b. If requirements are not met, show data as to why, what needs to be fixed, and who needs to fix it by when<br />c. Show areas of potential future improvement that are out of scope<br />d. Show a summary of tuning/settings/configurations if it adds value<br />e. 
Be prepared to deliver formatted raw data as backup<br /><br /><br />5.8 Tools for Performance Testing<br /><br /> Microsoft's Web Application Stress Tool<br /> Cyrano's OpenSTA<br /> Empirix's e-Test Suite 6.0<br /> Radview's WebLoad 5.0<br /> Rational Software's Rational Robot<br /> Mercury Interactive's Astra LoadTest 5.4<br /> Compuware's QALoad 4.7<br /> Segue Software's SilkPerformer 5.0<br /><br /><br /><br /><br />6.0 Installation Testing<br /><br />6.1 What Is Installation Testing?<br /><br />Installation testing checks the product's behavior under various adverse conditions. This type of testing is performed to ensure that all install features and options function properly. It is also performed to verify that all necessary components of the application are, indeed, installed.<br /><br />6.2 Flow Chart for Installation Testing<br /><br /><br /><br />6.3 Who Should Do Installation Testing, and How<br /> A person with in-depth knowledge of the system should oversee the installation and testing.<br /> Start by testing basic functions and functionality, and gradually add levels of complexity at each successive stage.<br /> As you complete each test, document the results and verify them against the Installation Guide. Investigate any problems and resolve them.<br /> This allows you to identify and resolve design concerns during testing rather than during deployment.<br /><br />6.4 What to Check in Installation Testing<br />Perform several installation tests, which include installing the application on:<br /> Computers that have the minimum hard drive space required<br /> Computers that have the minimum RAM required<br /> Removable drives<br /> Drives other than the default drive<br /> CLEAN systems (configurations with no other software installed)<br /> DIRTY systems (configurations with other programs installed, i.e. 
anti-virus software, office software, etc.)<br /> Using more than one machine to run the tests<br />We also test all the user setup options (full, typical, and custom), navigational buttons (Next, Back, Cancel, etc.), and user input fields to ensure that they function properly and yield the expected results.<br /><br />The uninstallation of the product also needs to be tested to ensure that all data, executables, and DLL files are removed. The uninstallation of the application is tested using the DOS command line, the Add/Remove Programs menu, and the manual deletion of files.<br /><br /><br /><br />7.0 Compatibility Testing<br /><br />Compatibility testing verifies that the product functions correctly on a wide variety of hardware, software, and network configurations. Tests are run on a matrix of platform hardware configurations including high-end, core-market, and low-end machines.<br /><br />The requirement to have a product run on different platforms defines the need to test the product for compatibility. The challenges posed include laboratory setup requirements for the hardware, software versions, installation and uninstallation procedures, etc. 
These issues need to be managed against the time and budget constraints of running the functional and GUI test cases for all possible combinations.<br /><br />There are mainly three case studies that address the different issues in compatibility testing:<br /><br />Case Study 1: A browser-based decision support system developed in Java, which should be tested for compatibility against different browsers, web servers, and operating systems.<br /><br />Case Study 2: The approach taken while testing the compatibility of a variety of sensors, different versions of operating systems, browsers, modem speeds, firewall settings, and LAN settings for a web site which uses fingerprints for authentication.<br /><br />Case Study 3: This addresses the testing of compatibility on different hardware and software. A video mail application was tested for compatibility of operating systems, browsers, and hardware such as cameras and capture cards.<br /><br />Software compatibility testing can be defined as verifying whether the relevant software interacts with, and shares information properly with, other software and hardware combinations. This test thus plays an important role whenever an application/web site is required to run on different platforms with different software and hardware components. Basically, compatibility testing answers questions like:<br /><br /> Will the selected software package operate on the users' machines and operating systems?<br /> Is it compatible with other computer programs and existing data files?<br /> With which models of a device, and which other manufacturers of similar devices, is the tested application compatible?<br /> Which versions of other software are compatible with the application to be tested?<br /><br /><br /><br /><br />7.1 Process for Compatibility Testing<br />The general process of compatibility testing is explained below:<br />Planning:<br />This is the first step in compatibility testing. 
It includes:<br /> Identifying the machines required for compatibility testing<br /> Informing the systems department so it can arrange the required operating systems, browsers, and other hardware/software components<br /> Preparing the formats of the test report and defect report, and submitting them to the customer for comments/approval<br /> Studying the application and listing the queries to be confirmed with the customer<br /> Allocating hardware and software combinations to team members<br />7.2 Identification of Parameters:<br />The second step in the process of compatibility testing is identifying the parameters. Some of the important parameters are explained below:<br />Hardware:<br />One can consider different configuration possibilities for a standard Windows-based PC used in business. Most PCs are modular, built up from various system boards, component cards, and other internal devices such as disk drives, CD-ROM drives, video, sound, modem, and network cards, plus peripherals like printers, scanners, mice, keyboards, monitors, cameras, joysticks, and other devices that plug into the system and operate externally to the PC.<br /><br />Operating Systems:<br />The operating system, and how the application interacts with it, will affect the appearance and functionality of an application. The most commonly used platforms are DOS, Windows, Mac OS, UNIX, VMS, and Linux.<br />Operating systems, with all required components, should be properly installed on the corresponding machines before proceeding with compatibility testing.<br /><br />Browsers:<br />Browser compatibility testing is applicable to web-based applications. Commonly used browsers are IE and Navigator. Some of the features in the application may or may not run on different browsers. E.g., frames can be displayed in IE but cannot be displayed in some versions of Navigator, and the marquee effect cannot be seen in some versions of IE. A given browser version may or may not run on all platforms. 
e.g., IE 4.01 cannot be loaded on Windows 2000 Server.<br /><br />Web Servers:<br />A web server is a program that, using the client/server model and the WWW's HTTP, serves the files that form web pages to web users. Popular web servers are IIS, Apache, iPlanet, Novell, Lotus Domino, etc. These different web servers are compatible with different platforms.<br />E.g., IIS for Windows NT Server, Apache for Unix, and Lotus Domino for OS/390.<br /><br /><br /><br />Databases:<br />This parameter comes into the picture when a single product supports multiple RDBMSs, as for every RDBMS there will be changes in stored procedures, triggers, etc. Hence it becomes essential to verify the compatibility of an application with the multiple RDBMSs it supports.<br /><br />Plug-ins:<br />Plug-in applications are programs that can be installed and used as part of a parent application, thus enhancing its functionality. The plug-in software is recognized automatically by the parent application and its functionality is integrated into it. Among the popular plug-ins is Adobe's Acrobat, which allows PDF files to be read in a browser.<br />7.3 Combination of Parameters to Test<br /><br />A matrix should be formed with suitable combinations of the above-mentioned parameters before starting testing. However, the following points have to be considered in the course of preparing the matrix:<br /><br /> Availability of hardware/software. If an item is not available, it cannot be included even if it is specified in the test requirements.<br /> Criticality of a particular parameter for the application to be tested. E.g., is it required to consider modem speeds as a parameter in the matrix?<br /> The relative importance of different values of a parameter. E.g., is it required to consider lower versions of operating systems and browsers?<br /> Priority to be given to crucial parameters and their values if enough time is not available because of a tight project schedule. E.g., 
give priority to the primary operating system (Win NT4.0 SP4) over the secondary operating system (Win NT4.0 SP5).<br /><br />7.4 Test Case Identification:<br /><br />Generally, compatibility testing is executed only after completion of functional testing, so test cases for compatibility testing are selected from the available functional test case suite.<br />The following points are to be considered while selecting such test cases:<br /><br />• Test cases for core functionality shall be selected, i.e. showstopper test cases and some of the ‘High’-priority test cases.<br />• The test cases selected shall be specific to the parameters indicated in the matrix, e.g. File-Open/Print for operating system compatibility and downloading a plug-in for web server compatibility.<br />• Test cases that produced defects during functional testing should also be selected for compatibility testing, since such a test case may behave differently with other combinations of parameters.<br /><br />7.5 Setup Management<br /><br />After identifying the parameters and test cases, the hardware/software required for compatibility testing shall be organized in a separate lab. 
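The matrix construction of 7.3 and the selection rule of 7.4 can be sketched in a few lines of Python. All parameter values, exclusions, and test-case names below are illustrative, not taken from this document:

```python
from itertools import product

# Illustrative parameter values; a real matrix would come from the test
# requirements, weighed by availability, criticality, and priority (7.3).
operating_systems = ["Win NT4.0 SP4", "Win 2000 Server", "Win XP Professional"]
browsers = ["IE 5.5", "IE 6.0", "NS 6.0"]

# Known-invalid combinations are excluded up front
# (e.g. IE 4.01 cannot be loaded on Win2000 Server).
excluded = {("Win 2000 Server", "IE 4.01")}

matrix = [(os_, br) for os_, br in product(operating_systems, browsers)
          if (os_, br) not in excluded]

# 7.4: only showstopper/'High'-priority functional test cases are carried
# over into the compatibility suite (hypothetical test-case records).
functional_suite = [
    {"id": "TC-001", "name": "File-Open/Print", "priority": "showstopper"},
    {"id": "TC-002", "name": "Download plug-in", "priority": "high"},
    {"id": "TC-003", "name": "Change UI theme", "priority": "low"},
]
compat_suite = [tc for tc in functional_suite
                if tc["priority"] in ("showstopper", "high")]

print(len(matrix), "configurations x", len(compat_suite), "test cases")
```

In practice the generated matrix would then be pruned by hand for criticality and schedule, as the considerations above describe.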
The following points are to be considered while setting up the lab:<br /><br />• Connect all identified machines to a separate network.<br />• Equip the machines with the necessary software/hardware combinations.<br />• Remove all other applications & data from these machines, as they will be subjected to frequent loading & unloading of operating systems.<br />• The database & web server shall be properly configured with the application.<br />7.6 Ensuring Pre-requisites<br />The following pre-requisites have to be ensured before execution of a compatibility test:<br />• The test report format shall be ready & approved by the customer, if applicable.<br />• The defect report format shall be in place.<br />• The lab setup should be ready in all respects, as indicated in setup management.<br />• The application should be functionally stable, with acceptable performance, on at least one configuration.<br />• The application to be tested should be properly installed, along with all hardware/software required by it.<br />7.7 Execution of Test:<br /><br />Execution of the test assumes that everything above is in place.<br />Operating systems and browsers shall be loaded in the proper sequence to avoid problems when switching between them.<br />• Selected test cases are to be executed as per the matrix, and results are to be entered in the test report.<br />• Defects shall be properly identified & verified before reporting. 
If required, screenshots shall be taken to support such defects.<br />• Rotate hardware & software after each round.<br />• Prepare a closure report indicating the activities performed during testing and an analysis of the test results.<br /><br />The following OS/browser checklist can be followed while performing compatibility testing:<br /><br />Browsers: IE 4, IE 5, IE 5.5, IE 6.0, NS 4.08, NS 4.51, NS 4.77, NS 6.0<br />Operating systems: Win95, Win98SE, Win ME, WinNT4 Enterprise SP6, Win 2000 Server, Win 2000 Advanced Server, Win 2000 Professional, Win XP Professional, Win XP Home, Win 2003 Server, Mac OS 8.6, Mac OS 9.2, Mac OS X<br />(Each operating system is tested in combination with each applicable browser.)<br /><br />Also to be noted are a few differences between the IE and NS browsers:<br /><br />Microsoft Internet Explorer:<br />• About 80% of internet users use MSIE (Microsoft Internet Explorer).<br />• MSIE is more stable and has fewer bugs, but lacks support for some basic standards.<br />• It has features that allow people to make nice-looking web pages (page transitions, better-looking tables and horizontal rules, favicons).<br />• Other, more extreme features (like the calendar and gradients in v5.5) make web pages easy to build and pretty, but web designers can't rely on them because they won't work in Netscape; designers are forced to make two versions of their site - one for MSIE and one for Netscape.<br /><br />Netscape Navigator:<br />• About 20% of internet users use Netscape.<br />• Netscape supports more standards than MSIE.<br />• Netscape tends to crash more often than Internet Explorer. 
• Netscape lets you feel good because you aren't using Microsoft software.<br />• Netscape's browser is essentially open source through mozilla.org.<br />• Netscape and AOL have been teaming up on things like AOL Instant Messenger.<br />• There seem to be fewer security problems with Navigator than with MSIE.<br /><br />The following tools can be used for ghosting operating systems for compatibility testing:<br /><br />1. Creating virtual machine images using VMware<br />2. Ghosting images using a ghost floppy<br />3. Ghosting images from a menu server where the images have already been stored<br />4. Ghosting images of the Macintosh operating system using Carbon Copy Cloner<br />7.8 What is Ghosting?<br /><br />Ghost imaging, using ghosting software, is a method of converting the contents of a hard drive -- including its configuration settings and applications -- into an image, and then storing the image on a server or burning it onto a CD. <br />When the contents of the hard drive are needed again, the ghosting software converts the image back to its original form. <br />Companies use ghost imaging when they want to create identical configurations and install the same software on numerous machines. 
For instance, if a company needs to dole out 100 laptops to its employees, then instead of manually setting configurations and installing applications on each machine, the ghosting software (usually run from a floppy) will retrieve the ghost image from the server, convert it into its original form, and copy it onto each laptop.<br /><br />8.0 Acceptance <br />We have differentiated the entry & exit criteria for each phase of testing:<br />(a) Requirements Phase (study of the project/product)<br />(b) Planning Phase (planning the testing activity) <br />(c) Design Phase (creation of all test plans, test cases, etc.)<br />(d) Execution Phase (execution of test cases based on specific testing needs, bug reporting, etc.)<br />8.1 Acceptance Criteria for Requirements Phase<br />Phase: Requirements Phase<br />Entry Criteria: Customer-signed Statement of Work (SOW)<br />Exit Criteria: Kickoff meeting has occurred<br />Project acceptance criteria have been identified and documented<br />8.2 Acceptance Criteria for Planning Phase<br />Phase: Planning Phase<br />Entry Criteria: Project acceptance criteria<br />Customer-signed SOW<br />Exit Criteria: Completed project schedule (e.g. 
Microsoft Project) – for PMP<br />Identified project risks – for PMP<br />List of HW/SW and other resources – for PMP<br />Project Management Plan (PMP)<br />8.3 Acceptance Criteria for Design Phase<br />Phase: Design Phase <br />Entry Criteria: Preparation of project plan / test plan <br />Preparation of test cases / test scenarios<br />Exit Criteria: Completion of test plans, test cases, test scenarios<br />8.4 Acceptance Criteria for Execution Phase<br />Life Cycle Phase: System / Performance Testing Phase – Execution Phase<br />Entry Criteria: Source code – system test baseline<br />Baselined system test plan<br />Properly configured test environment<br />Exit Criteria: Execute test cases, report bugs, re-test fixed bugs <br /><br />9.0 Manual vs. Automation <br />9.1 Manual Testing<br />Definition: That part of software testing that requires human input, analysis, or evaluation.<br /><br />• In an ideal world, software testing is 100% automated. Testers simply push a button, the tests execute automatically, and manual effort is limited to reviewing the results of the automation.<br />• In reality, manual testing is always a part of any testing effort. During the initial phases of software development, manual testing is performed until the software and its user interface are stable enough that beginning automation makes sense. Often, aggressive development cycles and release dates do not allow for the time required to design and implement automated tests. 
Also, for one-time development efforts or products with short lifetimes, manual testing is often the only sensible option from time and budget standpoints.<br />• If the product is being tested through a GUI (graphical user interface), and your automation style is to write scripts (essentially simple programs) that drive the GUI, an automated test may be several times as expensive as a manual test.<br />• If you use a GUI capture/replay tool that tracks your interactions with the product and builds a script from them, automation is relatively cheaper. It is not as cheap as manual testing, though, when you consider the cost of recapturing a test from the beginning after you make a mistake, the time spent organizing and documenting all the files that make up the test suite, the aggravation of finding and working around bugs in the tool, and so forth. Those small "in the noise" costs can add up surprisingly quickly.<br />• If you’re testing a compiler, automation might be only a little more expensive than manual testing, because most of the effort will go into writing test programs for the compiler to compile. Those programs have to be written whether or not they’re saved for reuse.<br /><br />9.2 Automation Testing<br /><br />Definition: Automated tests execute a sequence of actions without human intervention. This approach helps eliminate human error and provides faster results. <br /><br />• Since most products require tests to be run many times, automated testing generally leads to significant labor cost savings over time. Typically a company will pass the break-even point for labor costs after just two or three runs of an automated test.<br />• Testing may seem like just a set of actions, but good testing is an interactive cognitive process. That’s why automation is best applied only to a narrow spectrum of testing, not to the majority of the test process.<br />• Humans are good at noticing oddities; they’re bad at painstaking or precise checking of results. 
If bugs lurk in the 7th decimal place of precision, humans will miss them, whereas a tool might not. Tools are not limited to looking at what appears on the screen; they can look at the data structures that lie behind it.<br />• The fact that humans can’t be precise about inputs means that repeated runs of a manual test are often slightly different tests, which might lead to the discovery of a support-code bug. For example, people make mistakes, back out, and retry inputs, thus sometimes stumbling across interactions between error-handling code and the code under test.<br />• Configuration testing argues for more automation. Running against a new OS, device, 3rd-party library, etc. is logically equivalent to running with changed support code. Since you know change is coming, automation will have more value. The trick, though, is to write tests that are sensitive to configuration problems - to the differences between OSes, devices, etc. It likely makes little sense to automate your whole test suite just so you can run it all against multiple configurations.<br />• It’s annoying to discover a bug in manual testing and then find you can’t reproduce it. Probably you did something that you don’t remember doing. Automated tests rarely have that problem (though sometimes they’re dependent on parts of the environment that change without your noticing it). Rudimentary tracing or logging in the product can often help greatly - and it’s useful to people other than testers.<br />• An automated test suite can explore the whole product every day. A manual testing effort will take longer to revisit everything. 
So the bugs automation does find will tend to be found sooner after the incorrect change was made.<br /><br />9.3 Flow Diagram to Determine Test Scenarios for Manual and Automation<br /><br />[Flow diagram not reproduced.]<br /><br />9.4 Summary of Manual vs. Automated Testing<br /><br />Manual Testing – Pros:<br />• Test case creation is quick and inexpensive.<br />• Better at simulating real-world use.<br />• Better at analyzing test results and exploring new test cases.<br />• Best when test steps and results are not well-defined.<br />• Best when extensive analysis is required.<br />• Does not require a technically trained staff.<br />Manual Testing – Cons:<br />• Might not be consistent in rerunning test cases.<br />• Rerunning large volumes of test cases is expensive.<br />Automated Testing – Pros:<br />• Can execute test cases unattended.<br />• Can cost-effectively run large volumes of test cases.<br />• Can cost-effectively rerun large volumes of test cases repetitively.<br />Automated Testing – Cons:<br />• Test case creation is expensive and time-consuming.<br />• Maintaining test cases can be expensive.<br />• Requires a highly technical staff.<br />• Rerunning old tests is not likely to find new bugs.<br />9.5 Why & When One Should Consider Automation <br />If testing is a means to the end of understanding the quality of the software, automation is just a means to a means.<br />A few reckless assumptions about automated testing:<br /><br />Reckless Assumption #1 – Testing is a “sequence of actions.”<br />A more useful way to think about testing is as a sequence of interactions interspersed with evaluations. Some of those interactions are predictable, and some of them can be specified in purely objective terms. 
However, many others are complex, ambiguous, and volatile. Although it is often useful to conceptualize a general sequence of actions that comprise a given test, if we try to reduce testing to a rote series of actions the result will be a narrow and shallow set of tests.<br />Manual testing, on the other hand, is a process that adapts easily to changes and can cope with complexity. Testing may seem like just a set of actions, but good testing is an interactive cognitive process. That’s why automation is best applied only to a narrow spectrum of testing, not to the majority of the test process. If you set out to automate all the necessary test execution, you’ll probably spend a lot of money and time creating relatively weak tests that ignore many interesting bugs, and find many “problems” that turn out to be merely unanticipated correct behavior.<br /><br />Reckless Assumption #2 – Testing means repeating the same actions over and over.<br />Once a specific test case has been executed a single time and no bug is found, there is little chance that the test case will ever find a bug, unless a new bug is introduced into the system. If there is variation in the test cases, though, there is a greater likelihood of revealing problems both new and old. Variability is one of the great advantages of hand testing over script-and-playback testing.<br />Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else’s footprints minimizes the chance of being blown up by a land mine.<br /><br />Reckless Assumption #3 – We can automate testing actions.<br />Some tasks that are easy for people are hard for computers. Probably the hardest part of automation is interpreting test results. For GUI software, it is very hard to automatically notice all categories of significant problems while ignoring the insignificant problems. 
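The result-interpretation point cuts both ways: tools are bad at judging GUI output, but precise about numbers. A toy sketch (all values are illustrative, not from this document) of the "7th decimal place" observation made in 9.2 above, where a human reading a rounded display sees no difference but a strict automated comparison does:

```python
import math

# Hypothetical values: what the master compare file recorded vs. what the
# product computed. They differ only in the 7th decimal place.
expected = 2.7182818
actual = 2.7182817

# A tester reads a display rounded to 4 decimals -- both look identical.
looks_same_to_human = f"{expected:.4f}" == f"{actual:.4f}"

# An automated check with a tight absolute tolerance catches the difference.
passes_strict_check = math.isclose(expected, actual, rel_tol=0.0, abs_tol=1e-9)

print(looks_same_to_human, passes_strict_check)  # True False
```

The flip side, as the surrounding text argues, is that the same strict comparison will also flag harmless differences unless the tolerance is chosen with care.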
The problem of automatability is compounded by the high degree of uncertainty and change in a typical innovative software project.<br />Even if we have a particular sequence of operations that can in principle be automated, we can only do so if we have an appropriate tool for the job. Information about tools is hard to come by, though, and the most critical aspects of a regression test tool are impossible to evaluate unless we create or review an industrial-size test suite using the tool. Here are some of the factors to consider when selecting a test tool:<br />• Capability: Does the tool have all the critical features we need, especially in the areas of test result validation and test suite management?<br />• Reliability: Does the tool work for long periods without failure, or is it full of bugs? Many test tools are developed by small companies that do a poor job of testing them.<br />• Capacity: Beyond the toy examples and demos, does the tool work without failure in an industrial environment? Can it handle large-scale test suites that run for hours or days and involve thousands of scripts?<br />• Performance: Is the tool quick enough to allow a substantial saving in test development and execution time versus hand testing?<br />• Compatibility: Does the tool work with the particular technology that we need to test?<br />• Non-Intrusiveness: How well does the tool simulate an actual user? Is the behavior of the software under test the same with automation as without?<br /><br />Reckless Assumption #4 – An automated test is faster, because it needs no human intervention.<br />All automated test suites require human intervention, if only to diagnose the results and fix broken tests. It can also be surprisingly hard to make a complex test suite run without a hitch. 
Common culprits are changes to the software being tested, memory problems, file system problems, network glitches, and bugs in the test tool itself.<br /><br />Reckless Assumption #5 - Automation reduces human error. <br />Yes, some errors are reduced - namely, the ones that humans make when they are asked to carry out a long list of mundane mental and tactile activities. But other errors are amplified. Any bug that goes unnoticed when the master compare files are generated will go systematically unnoticed every time the suite is executed. Or an oversight during debugging could accidentally deactivate hundreds of tests. <br /><br />Reckless Assumption #6 – We can quantify the costs and benefits of manual vs. automated testing.<br />The truth is, hand testing and automated testing are really two different processes, rather than two different ways to execute the same process. Their dynamics are different, and the bugs they tend to reveal are different. Therefore, direct comparison of them in terms of dollar cost or number of bugs found is meaningless. <br /><br />Reckless Assumption #7 – Automation will not harm the test project.<br />The last and most thorny of all the problems that we face in pursuing an automation strategy: it’s dangerous to automate something that we don’t understand. If we don’t get the test strategy clear before introducing automation, the result of test automation will be a large mass of test code that no one fully understands. As the original developers of the suite drift away to other assignments, and others take over maintenance, the suite gains a kind of citizenship in the test team. The maintainers are afraid to throw any old tests out, even if they look meaningless, because they might later turn out to be important. So, the suite continues to accrete new tests, becoming an increasingly mysterious oracle, like some old Himalayan guru or talking oak tree from a Disney movie. 
No one knows what the suite actually tests, or what it means for the product to “pass the test suite,” and the bigger it gets, the less likely anyone will go to the trouble to find out.<br />9.6 A Sensible Approach to Automation<br />• Maintain a careful distinction between the automation and the process that it automates. The test process should be in a form that is convenient to review and that maps to the automation.<br />• Think of your automation as a baseline test suite to be used in conjunction with manual testing, rather than as a replacement for it.<br />• Carefully select your test tools. Gather experiences from other testers and organizations. Try evaluation versions of candidate tools before you buy.<br />• Put careful thought into buying or building a test management harness. A good test management system can really help make the suite more reviewable and maintainable.<br />• Assure that each execution of the test suite results in a status report that includes which tests passed and failed versus the actual bugs found. The report should also detail any work done to maintain or enhance the suite. I’ve found these reports to be an indispensable source of material for analyzing just how cost-effective the automation is.<br />• Assure that the product is mature enough that maintenance costs from constantly changing tests don’t overwhelm any benefits provided.<br /><br />10.0 Unit Summary<br />In this session we have learnt:<br />1. The following testing types:<br />i. Structural vs. Functional Testing<br />ii. Regression Testing<br />iii. Performance Testing<br />iv. Installation Testing<br />v. Compatibility Testing<br />vi. Acceptance Testing<br />vii. Manual vs. Automation Testing<br /><br />10.1 Exercise<br />Answer the following briefly:<br />2. List the various types of testing under black-box and white-box testing.<br />3. Draw the flow chart for installation testing.<br />4. Define and explain performance testing in a few lines.<br />5. 
What is the purpose of compatibility testing, and what is the approach for it?<br />6. Given a product, how do you decide the type of testing to be performed?<br /><br />Posted by Rajesh Babu Rajamanickam, 2008-01-14<br /><br /><span style="font-weight:bold;">Test Case Authoring</span><br /><br />Table of Contents<br /><br />1.0 General<br /> Module Objectives<br /> Module Structure<br />2.0 Test Cases<br /> Attributes of a good test case<br /> Most Common mistakes while writing Test Cases<br />3.0 Authoring test cases using functional specifications<br />4.0 Authoring Test Cases Using Use Cases<br /> Advantages of Test Cases derived from Use Cases<br />5.0 Test case Management and Test Case authoring tools<br />6.0 Test Case Authoring Tools<br /> Mercury Interactive’s Test Director<br /> Features & benefits of Test Director<br /> Applabs Test Link<br />7.0 Test Case Coverage<br />8.0 Unit Summary<br /> Exercise<br /><br />1.0 General<br />Test case authoring is one of the most complex and time-consuming activities for any test engineer.<br />The progress of the project depends on the quality of the test cases written. 
The test engineers need to take utmost care while developing test cases, and must ensure that they follow the standard rules of test case authoring so that the cases are easy to understand and implement.<br />This module aims at providing an insight into the fundamentals of test case authoring and the techniques to be adopted while authoring, such as writing test cases based upon functional specifications or use cases.<br />1.1 Module Objectives<br />At the end of this module, you should be able to:<br />• Define test cases<br />• Understand the process of developing/authoring test cases<br />• Understand test case management using some tools<br />• Understand test case coverage<br />1.2 Module Structure:<br /><br />1. Overview – 1 hr<br />2. Test Cases with Functional Specifications – 2 hrs<br />3. Test Cases with Use Cases – 2 hrs<br />4. Test Case Authoring Tools – 2 hrs<br />5. Test Case Coverage – 1 hr<br />Total duration: 8 hrs<br /><br />2.0 Test Cases<br />Definition: A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as a test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. <br />2.1 Attributes of a good test case<br /><br />Accurate. They test what their descriptions say they will test.<br />It should always be clear whether the tester is doing something or the system is doing it. If a tester reads, "The button is pressed," does that mean he or she should press the button, or does it mean to verify that the system displays it as already pressed? One of the fastest ways to confuse a tester is to mix up actions and results. To avoid confusion, actions should always be entered under the ‘Steps’ column, and results under the ‘Results’/’Expected Results’ column. 
What the tester does is always an action. What the system displays or does is always a result.<br />Economical.<br />Test cases should have only the steps or fields needed for their purpose. They should not give a guided tour of the software.<br />How long should a test case be? Generally, a good length for step-by-step cases is 10-15 steps. There are several benefits to keeping tests this short:<br />• It takes less time to test each step in short cases.<br />• The tester is less likely to get lost, make mistakes, or need assistance.<br />• The test manager can accurately estimate how long it will take to test.<br />• Results are easier to track.<br /><br />We should not try to cheat on the standard of 10-15 steps by cramming a lot of action into one step. A step should be one clear input or tester task. You can always tag a simple finisher onto the same step, such as click <OK> or press <Enter>. Also, a step can include a set of logically related inputs. You don't have to have a result for each step if the system doesn't respond to the step.<br />2.1.1.1 Repeatable, self-standing<br />A test case is a controlled experiment. It should get the same results every time, no matter who tests it. If only the writer can test it and get the result, or if the test gets different results for different testers, it needs more work in the setup or actions.<br />2.1.1.2 Appropriate<br />A test case has to be appropriate for the testers and environment. If it is theoretically sound but requires skills that none of the testers have, it will sit on the shelf. Even if you know who is testing the first time, you need to consider down the road -- maintenance and regression.<br /><br />2.1.1.3 Traceable<br />You have to know what requirement the case is testing. It may meet all the other standards, but if its result, pass or fail, doesn't matter, why bother?<br />The above list is comprehensive but not exhaustive. 
Based on individual requirements, more standards may be added to the above list.<br />2.2 Most Common mistakes while writing Test Cases<br />In each writer's work, test case defects tend to cluster around certain writing mistakes. If you are writing cases or managing writers, don't wait until cases are all done before finding these mistakes. Review the cases every day or two, looking for the faults that will make the cases harder to test and maintain. Chances are you will discover that the opportunities to improve are clustered in one of the seven most common test case mistakes:<br /><br />1. Making cases too long<br />2. Incomplete, incorrect, or incoherent setup<br />3. Leaving out a step<br />4. Naming fields that changed or no longer exist<br />5. Unclear whether tester or system does an action<br />6. Unclear what is a pass or fail result<br />7. Failure to clean up<br /><br />3.0 Authoring test cases using functional specifications<br />This means writing test cases for an application with the intent to uncover nonconformance with the functional specifications. This type of testing activity is central to most software test efforts, as it tests whether an application is functioning in accordance with its specified requirements. Additionally, some test cases may be written for testing the nonfunctional aspects of the application, such as performance, security, and usability.<br />The importance of having testable, complete, and detailed requirements cannot be overemphasized. In practice, however, having a perfect set of requirements at the tester's disposal is a rarity. In order to create effective functional test cases, the tester must understand the details and intricacies of the application. 
When these details and intricacies are inadequately documented in the requirements, the tester must conduct an analysis of them.<br />Even when detailed requirements are available, the flow and dependency of one requirement on another is often not immediately apparent. The tester must therefore explore the system in order to gain a sufficient understanding of its behavior to create the most effective test cases.<br />Effective test design includes test cases that rarely overlap, but instead provide effective coverage with minimal duplication of effort (although duplication sometimes cannot be entirely avoided in assuring complete testing coverage). Apart from avoiding duplication of work, the test team should review the test plan and design in order to:<br />• Identify any patterns of similar actions or events used by several transactions. Given this information, test cases should be developed in a modular fashion so that they can be reused and recombined to execute various functional paths, avoiding duplication of test-creation efforts.<br />• Determine the order or sequence in which specific transactions must be tested to accommodate preconditions necessary to execute a test procedure, such as database configuration, or other requirements that result from control or workflow. <br />• Create a test procedure relationship matrix that incorporates the flow of the test procedures based on the preconditions and postconditions necessary to execute a test case. A test-case relationship diagram that shows the interactions of the various test procedures, such as the high-level test procedure relationship diagram created during test design, can improve the testing effort.<br />Another consideration for effectively creating test cases is to determine and review critical and high-risk requirements, testing the most important functions early in the development schedule. 
It can be a waste of time to invest effort in creating test procedures that verify functionality rarely executed by the user, while failing to create test procedures for functions that pose high risk or are executed most often. <br />3.1.1.1 To sum up<br />Effective test-case design requires an understanding of system variations, flows, and scenarios. It is often difficult to wade through page after page of requirements documents in order to understand connections, flows, and interrelationships. Analytical thinking and attention to detail are required to understand the cause-and-effect connections within the system's intricacies. It is insufficient to design and develop high-level test cases that exercise the system only at a high level; it is important to also design test procedures at the detailed, gray-box level.<br /><br />4.0 Authoring Test Cases Using Use Cases<br />A use case is a sequence of actions performed by a system which, combined together, produce a result of value to a system user. While use cases are often associated with object-oriented systems, they apply equally well to most other types of systems.<br />Use cases and test cases work well together in two ways: if the use cases for a system are complete, accurate, and clear, the process of deriving the test cases is straightforward; and if the use cases are not in good shape, the attempt to derive test cases will help to debug the use cases.<br />4.1 Advantages of Test Cases derived from Use Cases <br />Traditional test case design techniques include analyzing the functional specifications, the software paths, and the boundary values. 
These techniques are all valid, but use case testing offers a new perspective and identifies test cases that the other techniques have difficulty seeing.<br />Use cases describe the “process flows” through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system (that moment you realize “we can’t get there from here!”). They also help uncover integration bugs, caused by the interaction and interference of different features, which individual feature testing would not see. The use case method supplements (but does not supplant) the traditional test case design techniques. <br />4.1.1.1 What should one know before converting use cases into test cases?<br />• Business logic and terminology of the vertical.<br />• Technical complexities and environment compatibilities of the application.<br />• Limitations of the application and its design.<br />• Software testing experience.<br />4.1.1.2 How to approach deriving test cases from use cases?<br />• Read and understand the objective of the use case.<br />• Identify the conditions involved in the use case.<br />• Identify the relations between the conditions within a use case.<br />• Identify the dependencies of one use case on another.<br />• Check the functional flow.<br />• If you suspect an issue in any manner, get it resolved with your client or design team.<br />• Break down the positive and negative test scenarios from each condition.<br />• Collect test data for the identified scenarios.<br />• Prepare a high-level test case index with unique test IDs.<br />• If you have a prototype of the application, compare the test scenarios with the prototype and review the test case index.<br />• Convert the test scenarios into test cases.<br />The first step is to develop the Use Case topics from the functional requirements of the Software Requirement Specification. 
The Use Case topics are depicted as an oval with the Use Case name. See Figure 1. The Diagram also identifies the Actors outside the system, and which participant initiates the action. <br /><br />Figure 1: Use Case Diagram<br /><br />The Use Case diagram just provides a quick overview of the relationship of actors to Use Cases. The meat of the Use Case is the text description. This text will contain the following:<br /><br />Name<br />Brief Description<br />SRS Requirements Supported<br />Pre & Post Conditions<br />Event Flow<br />In the first iteration of Use Case definition, the topic, a brief description and actors for each case are identified and consolidated. In the second iteration the Event Flow of each Use Case can be fleshed out. The Event Flow is, in effect, a role-playing of the requirements specification. The requirements in the Software Requirement Specification are each uniquely numbered so that they may be accounted for in the verification testing. These requirements should be mapped to the Use Case that satisfies them for accountability.<br /><br />The Pre-Condition specifies the required state of the system prior to the start of the Use Case. This can be used for a similar purpose in the Test Case. The Post-Condition is the state of the system after the actor interaction. This may be used for test pass/fail criteria. <br /><br />The event flow is a description (usually a list) of the steps of the actor’s interaction with the system and the system’s required response. Recall that the system is viewed as a black box. The event flow contains exceptions, which may cause alternate paths through the event flow. The following is an example of a Use Case for telephone systems.<br />5.0 Test case Management and Test Case authoring tools<br />Once the test cases are developed, they need to be maintained properly to avoid confusion as to which test cases have been executed and which have passed or failed. 
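Tracking which cases have run, and with what result, can be sketched as a small status tracker; the status values and all class and case names below are illustrative assumptions, not part of any tool described here:

```python
# Minimal test-case status tracker: records the latest execution
# result of each test case so the team can see what has run, what
# passed, and what failed. All names are hypothetical.

VALID_STATUSES = {"not run", "passed", "failed", "blocked"}

class TestCaseTracker:
    def __init__(self, case_ids):
        # Every case starts out as "not run".
        self.status = {case_id: "not run" for case_id in case_ids}

    def record(self, case_id, result):
        if result not in VALID_STATUSES:
            raise ValueError("unknown status: " + result)
        self.status[case_id] = result

    def summary(self):
        # Count cases per status, e.g. {"passed": 2, "failed": 1, ...}
        counts = {}
        for result in self.status.values():
            counts[result] = counts.get(result, 0) + 1
        return counts

tracker = TestCaseTracker(["TC-001", "TC-002", "TC-003"])
tracker.record("TC-001", "passed")
tracker.record("TC-002", "failed")
print(tracker.summary())
```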
In other words, the execution status of the test cases needs to be maintained.<br />The most important activity to protect the value of test cases is to maintain them so that they remain testable. They should be maintained after each testing cycle, since testers will find defects in the cases as well as in the software. <br />Test cases lost or corrupted by poor versioning and storage defeat the whole purpose of making them reusable. Configuration management (CM) of cases should be handled by the organization or project, rather than by test management. If the organization does not have this level of process maturity, the test manager or test writer needs to supply it. Either the project or the test manager should protect valuable test case assets with the following configuration management standards:<br />• Naming and numbering conventions<br />• Formats, file types<br />• Versioning<br />• Test objects needed by the case, such as databases<br />• Read-only storage<br />• Controlled access<br />• Off-site backup<br />6.0 Test Case Authoring Tools<br />Improving productivity with test management software:<br />Software designed to support test authoring is the single greatest productivity booster for writing test cases. It has these advantages over word processing, database, or spreadsheet software:<br />• Makes writing and outlining easier<br />• Facilitates cloning of cases and steps<br />• Easy to add, move, and delete cases and steps<br />• Automatically numbers and renumbers<br />• Prints tests in easy-to-follow templates<br />Test authoring is usually included in off-the-shelf test management software, or it could be custom written. Test management software usually contains more features than just test authoring. When you factor them into the purchase, these tools offer a lot of power for the price. 
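The automatic numbering and renumbering advantage mentioned above amounts to regenerating sequential IDs whenever cases are added, moved, or deleted. A minimal sketch, where the "TC-" prefix and the case titles are assumed for illustration:

```python
# Renumber test cases after an insertion or deletion so that IDs
# stay sequential. The "TC-" prefix is an assumed naming convention.

def renumber(cases, prefix="TC-"):
    # Assign IDs TC-001, TC-002, ... in list order.
    return [(f"{prefix}{i:03d}", title) for i, title in enumerate(cases, start=1)]

cases = ["Login with valid password", "Login with invalid password"]
print(renumber(cases))
# Insert a new case in the middle and renumber; later IDs shift.
cases.insert(1, "Login with empty password")
print(renumber(cases))
```

Doing this by hand in a word processor or spreadsheet is exactly the kind of error-prone bookkeeping a test authoring tool removes.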
If you are shopping for test management software, it should have all the usability advantages listed just above, plus additional functions:<br />• Exports tests to common formats<br />• Multi-user<br />• Tracks test writing progress and testing progress<br />• Tracks test results, or ports to a database or defect tracker<br />• Links to requirements and/or creates coverage matrices<br />• Builds test sets from cases<br />• Allows flexible security<br />There are many test case authoring tools available in the market today. Here we will limit the discussion to Mercury’s Test Director and the in-house tool TestLinks.<br />6.1 Mercury Interactive’s Test Director <br />By far the most familiar tool for maintaining test cases is Mercury Interactive’s Test Director. Test Director helps organizations deploy high-quality applications more quickly and effectively. It has four modules—Requirements Manager, Test Plan, Test Lab and Defects Manager. These allow for a smooth information flow between various testing stages. The completely web-enabled Test Director supports high levels of communication and collaboration among distributed testing teams.<br />6.1.1 Features & benefits of Test Director<br />Supports the Entire Testing Process<br />Test Director incorporates all aspects of the testing process—requirements management, planning, scheduling, running tests, issue management and project status analysis—into a single browser-based application. <br />6.1.1.1 Provides Anytime, Anywhere Access to Testing Assets<br />Using Test Director’s Web interface, testers, developers and business analysts can participate in and contribute to the testing process by collaborating across geographic and organizational boundaries. <br />6.1.1.2 Provides Traceability Throughout the Testing Process<br />Test Director links requirements to test cases, and test cases to issues, to ensure traceability throughout the testing cycle. 
When a requirement changes or a defect is fixed, the tester is notified of the change. <br />6.1.1.3 Integrates with Third-Party Applications<br />Whether you're using an industry-standard configuration management solution, Microsoft Office, or a homegrown defect management tool, any application can be integrated into Test Director. Through its open API, Test Director preserves your investment in existing solutions and enables you to create an end-to-end lifecycle-management solution. <br />6.1.1.4 Manages Manual and Automated Tests<br />Test Director stores and runs both manual and automated tests, and can help jumpstart your automation project by converting manual tests to automated test scripts. <br />6.1.1.5 Accelerates Testing Cycles<br />Test Director’s Test Lab Manager accelerates test execution cycles by scheduling and running tests automatically—unattended, even overnight. The results are reported into Test Director’s central repository, creating an accurate audit trail for analysis.<br />6.1.1.6 Facilitates a Consistent and Repeatable Testing Process<br />By providing a central repository for all testing assets, Test Director facilitates the adoption of a more consistent testing process, which can be repeated throughout the application lifecycle or shared across multiple applications or lines of business (LOBs). <br />6.1.1.7 Provides Analysis and Decision Support Tools<br />Test Director’s integrated graphs and reports help analyze application readiness at any point in the testing process. Using information about requirements coverage, planning progress, run schedules or defect statistics, QA managers are able to make informed decisions on whether the application is ready to go live. <br />6.2 Applabs Test Link<br />Though it does not have as many features as Test Director, the TestLinks tool developed by Applabs is a simple and effective tool for maintaining test plans and test cases. 
It is developed using PHP and MySQL.<br /><br />The various options provided in this tool are:<br />Product Management – Here the user can create, edit, and delete products.<br /><br />Test Case Management – Here the user can create, edit, and delete test cases. There is an option to search test cases, as well as a print option for producing a hard copy of the test cases developed.<br /><br />Test Plan Management – Here the user plans the test effectively with the following options:<br /><br />Creating, editing, and deleting test plans<br />Linking test cases to a test plan (Import (smartlink) into a Test plan)<br />Defining user/project rights<br />Deleting test cases<br /><br />Keyword Management – Here the user can create, edit, and delete keywords, and can assign a single keyword to multiple cases.<br /><br />Execution Status – Here the user can generate various reports based on test case status (i.e., which cases have passed, failed, or been blocked) or on the basis of build, etc. In this section the user can create, edit, and delete milestones, and can also set the risk, importance, and owner of each category.<br /><br />Test Case Execution – This option allows the user to execute test cases by either components or their category levels, and also allows the user to create a new build. A print option is provided for hard copies of the test plan and test cases.<br /><br />User Administration – Under this section the user can create a new login to the tool or modify existing user details. <br />7.0 Test Case Coverage<br />Test coverage is about ensuring that test plans and test cases include information vital for successful testing of the program in the areas of functionality, performance, and the overall quality of the software. In addition, test managers who prepare test plans that provide proper test coverage can avoid the wrath of a project manager whose implementation has just gone sour or an angry customer whose system has just crashed. 
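One concrete way to reason about test coverage in this sense is a requirements-to-test-cases matrix: each requirement is mapped to the tests that exercise it, and requirements with no tests stand out immediately. The requirement and test IDs below are illustrative assumptions:

```python
# Build a requirements coverage matrix: which test cases exercise
# which requirements, and which requirements have no test at all.
# All IDs are hypothetical examples.

requirements = ["REQ-01", "REQ-02", "REQ-03"]

# Each test case lists the requirements it verifies.
test_cases = {
    "TC-001": ["REQ-01"],
    "TC-002": ["REQ-01", "REQ-02"],
}

def coverage(requirements, test_cases):
    matrix = {req: [] for req in requirements}
    for tc, reqs in test_cases.items():
        for req in reqs:
            matrix[req].append(tc)
    uncovered = [req for req, tcs in matrix.items() if not tcs]
    return matrix, uncovered

matrix, uncovered = coverage(requirements, test_cases)
print("uncovered requirements:", uncovered)  # REQ-03 has no test
```

Note this measures coverage of the requirements, not of the code: every line of code could be executed while REQ-03 remains untested, and vice versa.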
<br /><br />(Note that test coverage is not the same thing as code coverage. Code coverage measures how the tests have exercised the code, e.g., which lines of code have never been executed. But you can exercise every line of code and still miss something important—like figuring out that the program doesn’t work at all on Windows 2000.)<br /><br />Test coverage requires information—about how the program installs, how fast the program accesses and processes data, and how the program appears on the monitor. These are just a few examples of the kinds of things a tester needs to know about a program’s functionality and performance in order to provide appropriate test coverage. Here we talk about gathering and using that information. <br /><br />However, gathering information about a program just for the sake of collecting information does not improve your test coverage. Adequate test coverage involves a systematic approach that includes analyzing the available documentation for use in test planning, execution, and review. In order to come up with a successful strategy to improve test coverage, you’ll need to do three things:<br /><br />1. Create a plan of attack to provide strong test coverage<br />2. Determine the scenarios for the test plan<br />3. Manage the changes made to information used by testing <br /><br />Implementing this strategy requires that you, the test manager, think about people as much as about documents—taking into account the interests of the rest of the project team, and using people skills to encourage cooperation between teams. To this end, part of the manager’s role is to serve as “knowledge manager”—demonstrating how to share and store the team’s knowledge, and then using that knowledge to improve the organization’s methods.<br /><br />8.0 Unit Summary<br />In this session we have learnt:<br />1. The process of authoring Test cases.<br />2. Authoring Test cases using functional Specifications and Use cases.<br />3. 
Test case management and Test case authoring tools.<br /><br />8.1 Exercise<br /><br />Answer the following in short.<br />1. What is “test case authoring” and what are the attributes of a good test case?<br />2. How do you derive test cases from use cases?<br />3. Write a test case to test the login screen of a mail system.<br />4. Write a test case for testing a “user details” interface that accepts the user’s personal details (you can make some assumptions and limitations).<br /><br /><span style="font-weight:bold;">SOFTWARE <br />A Maturity Model for Automated Software Testing </span><br />Mitchel H. Krause <br />Aside from their mandate to provide a safe and reliable product, manufacturers of computerized medical devices may have three very practical reasons for automating their software testing program: their product is too complicated to test manually, the time devoted to manual testing is cutting into potential profits, and current FDA requirements will be easier to satisfy with automated testing and documentation. If any of these factors motivates your company, this article will help you to sort out the issues to be considered and the options available. Then, when the automated test program is in place, safer and more reliable products will follow.1 The sorting instrument presented is a maturity model that plots four levels of testing maturity in terms of the resources required to move from one level to the next. The model can be used to determine the level that best fits your company and its products. <br />THE SOFTWARE TESTING MATURITY MODEL <br />The software testing maturity model, shown in Figure 1, is similar to a software process maturity model that is familiar to many software engineers. 
It has been described by Watts S. Humphrey in his book Managing the Software Process,2 and has been cited by Frank Houston, a former FDA staffer, and Steven Rakitin in presentations to the Health Industry Manufacturers Association.3,4 The version shown here as Figure 2 is adapted from Rakitin's presentation. The process model adapts well to automated software testing because effective software verification and validation programs grow out of development programs that are well planned, executed, managed, and monitored. A good software test program cannot stand alone; it must be an integral part of the software development process. <br />Level 1: Accidental Automation. The first level of the software testing model--like level 1 in the software process model--is characterized by ad hoc, individualistic, chaotic attempts to get the job done. Important information (for example, what to test) is not documented and must be extracted from in-house experts. Test plans are sketchy. Test results are not documented consistently. Schedules slip. Either products are delayed or testing becomes a cursory, poorly documented exercise. Management is uninvolved or uninformed. <br />This level has been designated Accidental Automation because the use of any automated tools or techniques comes about almost as if by accident and is not supported by process, planning, or management functions. Products released on the basis of such testing may well be accidents waiting to happen. Testing at this level may be appropriate only for a product that has no potential for harming the patient or user; it is never appropriate for a computerized medical device. <br />Level 2: Beginning Automation. The second testing level corresponds directly to Level 2--Repeatable in the software process maturity model (see Figure 2). 
There are hundreds of capture-and-replay test tools on the market today that simply repeat the responses of a system under test.5 As in the process model, however, these tools have limited capabilities and lose their economic usefulness quickly as a product changes. <br />Level 2 testing is still dependent on information locked in the minds of in-house experts, although documentation is beginning to appear in the form of software requirements specifications (SRSs) and test requirements specifications (TRSs). However, in most cases, large portions of these documents are written after the fact and used to meet regulatory requirements rather than to direct the development and test processes. Writing them does, however, provide good practice for moving to level 3. <br />Level 3: Intentional Automation. At the third level, automated testing becomes both well defined and well managed. The TRSs and the test scripts themselves proceed logically from the SRSs and design documents. Furthermore, because the test team is now part of the development process, these documents are written before the product is delivered for testing. Consequently, schedules become more reliable. Level 3 is appropriate for many medical device manufacturers. <br />Level 4: Advanced Automation. The highest testing maturity level is a practiced and perfected version of level 3 with one major addition: postrelease defect tracking. Defects are trapped and sent directly back through the fix, test creation, and regression test processes. The software test team is now an integral part of product development, and testers and developers work together to build a product that will meet test requirements. Any software bugs that do occur are caught early, when they are much less expensive to fix. 
When testing is performed at this level, an FDA inspector can pick up any piece of product documentation and trace the development process all the way from the SRS that describes the feature to the test results that validate it. <br />A Checklist of Issues. How can these software testing maturity levels help a company to plan and implement an automated software test program? The answer to that question comes from careful consideration of four issues: <br />* What is the profile of your company and its products? <br />* What processes do you need to implement as part of an automated testing program? <br />* What kind of people do you need in order to create and run a testing program? <br />* Which automated software test products fit your profile and process? <br />Significantly, price is not on the list. That is because the cost of any one component, especially the test tool, becomes insignificant when it is compared with the potential payback. A well-planned and well-executed software test automation process will pay for itself many times over by ensuring fewer bugs and field fixes, shortening product development cycles, and providing labor savings. And, if you keep your ultimate goal in mind when defining processes, choosing staff, and buying test tools, your testing program will continue to yield a good return as you advance from one level of maturity to the next. <br />PROFILE: RANKING YOUR COMPANY'S PRODUCTS <br />Most computerized medical devices can benefit from some type of automated testing. In fact, Boris Beizer, who is probably the most well known expert in the field of software testing, has said, "As far as I'm concerned, manual testing is ludicrous and self-contradictory. It's based upon a fallacy. Anybody who thinks they can test manually, doesn't take into account the error rate in manual test execution."5 However, knowing what level of automation is appropriate requires a good understanding of your company's products. 
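The profiling exercise that follows scores a company in four categories and, per the FDA guidance quoted at its end, takes the highest score as the target testing maturity level. A minimal sketch of that evaluation, with purely illustrative category scores:

```python
# Test-level profile: the target testing maturity level is the
# highest score across the four profile categories. The scores
# below are illustrative examples, not a real assessment.

scores = {
    "project size":   2,  # e.g. 10,000-30,000 lines of code
    "complexity":     3,  # e.g. uncommon I/O, graphics screen or printer
    "financial risk": 3,  # e.g. unacceptable risk to the company
    "patient risk":   4,  # e.g. FDA Class III device
}

target_level = max(scores.values())
print("target testing maturity level:", target_level)
```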
<br />The exercise described below will help you to create a test-level profile of your company and its products. The profile is a guide to how your company may benefit from an investment in processes, people, and automated software test products. The point scores at the end of each section provide a rough estimate of the level of software testing maturity you should strive to meet. <br />How Large Are Your Software Projects? As software projects increase in size, the resulting products become harder to test and at some point manual testing can no longer cover enough functionalities to ensure safe and reliable products. There are many ways to measure the scope of a software project, but a simple line count is a start: <br />* Score 1 if your product has fewer than 10,000 lines of code. <br />* Score 2 if your product has between 10,000 and 30,000 lines of code. <br />* Score 3 if your product has between 30,000 and 70,000 lines of code. <br />* Score 4 if your product has more than 70,000 lines of code. <br />How Complex Is Your Product? Systems with multiple inputs and outputs, graphics screens or printers, embedded processors, or multiple microprocessors are all candidates for the controlled sophistication of automated testing. If two or more interactive processors are used, the product probably presents integration and timing issues that cannot be tested manually. Similarly, if the product has an embedded processor, it may have functionalities that cannot be tested manually. In other cases, it may simply be impractical to test the system manually. Printers are one example of a common peripheral that is hard to test by hand. They not only accept commands and data from a software system, they also send back status and error signals to which the system must respond correctly. 
It is slow, inconvenient, and sometimes impossible for a tester to follow test plans that try to duplicate all the combinations of acknowledgment, system-busy, paper-out, baud rate, error, select, sensor, and other signals the printer might return. The input simulation provided by a sophisticated automatic test system can both speed up this process and make it traceable and reproducible. <br />Even testing of that seemingly ubiquitous input device, a keyboard or keypad, can benefit from using an automated test tool with simulation capabilities. Timing issues, especially, are nearly impossible to test manually. The fatal accidents in the mid 1980s that involved a radiation therapy machine are a good example of the kinds of problems that can occur. This particular machine had both therapy and diagnosis modes, and operators entered a series of keystrokes to switch the system from a high-energy to a low- energy mode. If the keystrokes were typed in too fast, however, the high-energy mode would remain in effect even though the operator would assume the change had been made. Later, when the system was activated, it sent a damaging and sometimes lethal dose of radiation into the patient.6 An automated test tool with simulation capabilities could have detected this problem early, before any harm was done. Keyboard simulations could have been set up to test the effect of varying keyboard input speeds. (The actual resolution of the problem involved many factors in addition to keyboard input speed; the report cited gives a full account of these accidents and their outcome.) <br />System outputs may also be tested more efficiently using automated methods. After an 8-, 10-, or 12-hour day, even the most conscientious human tester will fail to notice some errors or forget to document them. Other outputs either cannot be monitored manually or the testing may require nonintegrated measurement devices that may be difficult to set up and monitor. 
Finally, some potentially fatal software flaws may never show up during functional (black-box) testing. Detecting these problems requires an automated system that can use white-box test methods to look inside the system.1 <br />* Score 1 if your product has a single processor and simple inputs and outputs. <br />* Score 2 if your product has a single processor and common inputs and outputs. <br />* Score 3 if your product has uncommon inputs and outputs or if it uses a graphics screen or printer. <br />* Score 4 if your product uses multiple or embedded processors that cannot be fully tested using black-box methods. <br />What Financial Risk Does Your Product Pose for the Company? Both loss of market share and exposure to liability claims can create substantial financial risks for medical device companies. Because all products have a life cycle, the more time a new product spends in the test-and-fix-and-retest cycle, the less time it will spend on the market. Also, when market entry is delayed, sales will be lost even if the product is better than its competition. Even greater losses can occur if a poorly tested product harms someone. The manufacturer will face costly FDA actions and product liability suits. In worst-case scenarios, the product may never return to the market and the company itself will fail. <br />* Score 1 if a malfunction or failure of your product poses no threat to the financial health of your company, from either liability claims or loss of market share. <br />* Score 2 if a malfunction or failure of your product presents a small but acceptable risk to the financial health of your company. <br />* Score 3 if a malfunction or failure of your product presents an unacceptable risk to the financial health of your company. <br />* Score 4 if a malfunction or failure of your product would cause irreparable harm to your company. <br />What Risk Does Your Product Pose for the Patient and Operator? 
Although concerns about size, complexity, and financial risk are important in all software projects, the bottom line for a medical device company is risk to patients and health-care providers. Medical products must be both safe and effective. That is, they must do what they are designed to do and, when something does go wrong, the malfunction or failure must cause no harm. The product's FDA classification and hazard analysis results may determine if automated testing should be implemented. If a computerized medical device is categorized as Class II or Class III, an automated software test program may be necessary to provide both the testing and documentation required. Similarly, if the product presents software-related hazards, an automated test program might help your company to verify, validate, and document the measures taken to mitigate those hazards. <br />* Score 1 if your product is FDA Class I and a hazard analysis has shown there is no possibility of its software causing harm to a patient or operator. <br />* Score 2 if your product is FDA Class I and a hazard analysis has shown there is a remote possibility of its software causing harm to a patient or operator. <br />* Score 3 if your product is FDA Class II. <br />* Score 4 if your product is FDA Class III. <br />Evaluating Your Scores. In its "Reviewer Guidance for Computer-Controlled Medical Devices," FDA supplies an approach to evaluating the scores assigned in this exercise: "When a level of concern is assigned for each functioning component of the software, the highest level of concern generated is that assigned to the software aspect of the device."7 Thus, if you want to ensure the long-term success of your company, aim for the level of automated software testing equal to your highest score in any category. <br />PROCESS: CONTROLLING TEST POLICIES AND PROCEDURES <br />If any one word sums up the regulatory demands being placed on medical device manufacturers, it is process. 
No matter how much effort goes into designing, testing, and manufacturing a product, an auditor will not be satisfied if the process is not written down, followed, and documented. Process-related expenses will be incurred regardless of the testing level achieved or whether or not the software test process is automated; however, they can vary significantly across the testing levels. <br />Level 1 Process Costs. When software testing is at level 1, process costs are hidden. They arise from not having a defined process and can be very high, indeed. Such costs can include those incurred by delayed product introductions, the need for frequent field fixes, and a generally ineffective product development effort. <br />Level 2 Process Costs. Surprisingly, process costs can be highest for a company that is testing at level 2, especially one that is contemplating a move to level 3 in the foreseeable future. The costs are high because at level 2 the company is probably just starting to evaluate its software testing needs and to put standardized procedures in place. It may have to experiment, hire consultants, and establish or expand job areas, such as regulatory affairs. <br />Process Costs at Levels 3 and 4. Although the two major forces behind process improvement--FDA regulation and the need for ISO 9000 certification--may affect any company, those testing at levels 3 or 4 almost certainly need to meet FDA software test requirements. Such compliance is expensive and time-consuming, but the good news is that creating and documenting procedures for an automated testing program is no more expensive than doing so for a manual one. In fact, use of an automated test tool with scripting, test identification, and automatic documentation capabilities can reduce costs by providing some of the framework and content required. 
<br />The FDA "Reviewer Guidance for Computer-Controlled Medical Devices Undergoing 510(k) Review" states that "FDA is focusing attention on the software development process to assure that potential hazardous failures have been addressed, effective performance has been defined, and means of verifying both safe and effective performance have been planned, carried out, and properly reviewed."8 In order to get marketing approval for any product, its manufacturer must prove to FDA that the product does what it is supposed to do and that it is safe. The way to do that is not only through clinical trials but also by documenting the process that was followed to make the product eligible for such trials. <br />In contrast, ISO 9000 certification is based on process alone. Because the products themselves are not certified, the certification authority is concerned solely with whether the process that created the product is traceable, repeatable, and documented. When the process is proven, the site responsible for making the product is certified. An ISO 9000 certification audit costs about $10,000 to $20,000, but that is only the barest tip of the iceberg. The total cost includes the resources required to evaluate the company's needs, get the appropriate procedures in place, have them audited and approved, and motivate personnel to use them. <br />If established procedures are being revised to accommodate automation, existing regulatory affairs and quality assurance personnel may need to devote two to four weeks each to the project. In addition, it may take a technical writer about a month to rewrite the policy and procedure manuals. Finally, occasional technical support will be required from software developers and test engineers. <br />PEOPLE: CHOOSING QUALIFIED TESTERS <br />No matter what type of testing a company does, manual or automated, experienced people are needed to create the test plans and write test scripts. <br />Level 1 People Costs. 
At test maturity level 1, testing is often limited to debugging. A programmer writes and debugs the product's software until everything seems to work correctly. Because only the programmer is involved, testing costs are hidden in the cost of development. Likewise, the potential benefits of better test practices are hidden in field-support and product-upgrade costs. Thus, level 1 people costs are essentially unknown. <br />Level 2 People Costs. In software testing programs at level 2, testing is recognized as a separate function. Test plans and scripts are generally written by an experienced product user or support person who may or may not have programming experience. In any case, the person performing this task must understand the SRSs and design specifications well enough to write a comprehensive test plan and test scripts. The scripts are then given to testers who run them and record the results. One option is to hire a group of low-paid, inexperienced users; another is to recruit testers in-house. Whoever the testers are, they must understand that their job is to try to break the system as well as to make sure it works right. Level 2 people costs may also include one or more high-level support people to coordinate test writing, supervise the testers, and edit the results. Also, since the labor that goes into setting up a capture-and-replay tool is not reusable, the cost of one test cycle must be multiplied by the number of test cycles expected. <br />People Costs at Levels 3 and 4. Automated testing plans are most often written by a software test engineer, who should also participate in product development meetings with design engineers to help build testability into the product. The test engineer's programming background combined with a familiarity with the product will ensure the creation of efficient tests that attack the weakest parts of the product.
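To make the contrast with captured sessions concrete, here is a toy scripted test in Python. The device function and its 1–999 ml/h range are invented purely for illustration and are not taken from any real product or test tool:

```python
def set_infusion_rate(rate):
    """Hypothetical function under test: accepts rates of 1-999 ml/h."""
    if not 1 <= rate <= 999:
        raise ValueError("rate out of range")
    return rate

def run_suite():
    """A scripted test: loops, boundary values, and automatic result logging,
    none of which a simple capture-and-replay session can express."""
    results = []
    for rate, should_accept in [(0, False), (1, True), (999, True), (1000, False)]:
        try:
            set_infusion_rate(rate)
            accepted = True
        except ValueError:
            accepted = False
        results.append((rate, "PASS" if accepted == should_accept else "FAIL"))
    return results
```

Because the expected outcomes are encoded in the script, a change to the product requires editing one table of values rather than recapturing an entire interactive session.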
If the test tool has white-box test capabilities, the test engineer uses his or her knowledge of system internals to specify tests for functions that cannot be tested manually. <br />The test plan is then used to write the test script programs. This work can be done by the test engineer or given to application programmers. The level of programming experience required to write test scripts depends on the test tool used. Generally, the most versatile tools run on scripts written in some version of a common programming language, such as C. Other tools use simplified languages. In any case, at least one member of the test team must have some familiarity with writing a structured set of instructions. Because the automated testing tool runs the tests and creates the documentation, no costs are added for hiring testers or diverting in-house personnel to perform and document the tests. <br />PRODUCTS: CHOOSING THE RIGHT TESTING TOOL <br />The requirements of the product and process determine the selection of an automated testing tool. However, medical device manufacturers should beware of confusing development aids with automated software test tools. Companies can spend large sums on many kinds of debugging tools and in-circuit emulators and still not have an automated test program. A software development aid has done its job when the product, or product component, is debugged and seems to work. Automated test tools, on the other hand, are designed not only to verify the system, but also to stress it to the point that it will break in the lab before it can fail in the field and harm a patient or operator. <br />Level 1 Tool Costs. Although development aids such as debugging programs and in-circuit emulators may be used in level 1 test programs, no automated test tools are used. Therefore, there are no tool costs at this level. <br />Level 2 Tool Costs. 
Level 2 testing is the domain of simple capture-and-replay tools that employ rudimentary scripting capabilities and are often used to verify operator interfaces. Prices for such tools start at about $200 and can reach $5000 or more for the more-sophisticated models. The less-expensive, software-only versions are often intrusive; that is, they run on the same computer as the software application being tested. Because the tool and product occupy the same space, product timing and performance can undergo unpredictable changes. Even if no problems show up during testing, the product shipped is never exactly the same as the product tested. Capture-and-replay tools with integral capture hardware eliminate the problems associated with intrusiveness but retain another problem characteristic of such systems--inflexibility. <br />Because a capture-and-replay test suite for a graphic user interface (GUI) can contain thousands of captured screen images and consume megabytes of memory, the time it takes to gather these images is significant. Timing variations and the fact that GUI displays are seldom static can add even more time. Most significant, however, is the amount of time needed to recapture, integrate, and retest the inevitable changes caused by debugging and last-minute product upgrades. Thus, capture-and-replay tools should be used only for the simplest of products. <br />Tool Costs at Levels 3 and 4. High-level test tools can include several advanced capabilities in addition to capture and replay. The following are features to look for when purchasing tools: <br />* Scripting. The tool's test script language should be as functional as a high-level computer language, permitting the inclusion of files, libraries, loops, and conditional statements. It also should include aids to help debug the scripts themselves. <br />* Monitoring. 
A choice of intrusive software monitoring, such as that used in capture-and-replay tools, or nonintrusive hardware monitoring of system outputs may be available. An added high-level feature in the most sophisticated systems is direct-processor monitoring. With direct-processor monitoring, a connector similar to an in-circuit emulator pod is mounted on the processor and monitors the activity of the product under test. The test tool is nonintrusive because the connector never sends signals to the application being tested. It is also quite fast and accurate because it works at the processor level. <br />* Black-Box Simulation and Stimulation. A high-level tool should be able to emulate the actions of a human tester. Hardware is available that can simulate such product stimulations as keys being pressed, printers responding, tones being generated, relays opening and closing, and other analog or digital inputs. In short, advanced simulation capabilities should enable tests to run unattended. <br />* White-Box Simulation and Stimulation. The test tool should also be able to simulate and monitor the internal workings of the product tested. Such white-box testing capabilities permit testing of timing, integration, and resource issues that cannot be tested manually. <br />* Documentation. Automated test tools can log both test parameters and test results. If integrated into the software development process, a sophisticated system should be able to produce much of the documentation required by regulatory agencies. <br />Test tools suitable for testing at levels 3 and 4 cost from $15,000 to $75,000. <br />CONCLUSION <br />As described above, once you determine your company profile, perfect your processes, establish test specialists, and give the team members appropriate testing tools, your company can realize the benefits of automated software testing.
When compared with manual programs, automation properly applied will result in higher-quality products, lower risks to your company and the patients you serve, faster regulatory approvals, and decreased time to market. The higher level you reach on the automated software testing maturity model, the more benefits you will realize. Whatever level you choose, however, keep in mind a major lesson of the last 30 years of computing: No matter what tools you buy, your largest investment by far will be in the processes and people you put in place to use those tools. Purchase automated software testing tools based on how they can maximize your investments in processes and people, not on the price of the tools themselves.Rajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-6514393915041919362008-01-14T16:47:00.001-08:002008-01-14T16:47:59.143-08:00What is the difference between an application server and a Web server?<span style="font-weight:bold;">What is the difference between an application server and a Web server?</span><br /><br />Taking a big step back, a Web server serves pages for viewing in a Web browser, while an application server provides methods that client applications can call. A little more precisely, you can say that: <br />A Web server exclusively handles HTTP requests, whereas an application server serves business logic to application programs through any number of protocols. <br /><br />Let's examine each in more detail.<br />The Web server<br />A Web server handles the HTTP protocol. When the Web server receives an HTTP request, it responds with an HTTP response, such as sending back an HTML page. 
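As a minimal illustration of that request/response cycle, the sketch below uses Python's standard http.server module; it is a toy, not a picture of how production Web servers such as Apache or IIS are built:

```python
import http.server
import threading
import urllib.request

class HelloHandler(http.server.BaseHTTPRequestHandler):
    """Answer every GET request with a small static HTML page."""
    def do_GET(self):
        body = b"<html><body><h1>Hello from the Web server</h1></body></html>"
        self.send_response(200)                       # HTTP status line
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                        # the HTML page itself
    def log_message(self, *args):                     # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A browser would issue the same kind of GET request:
url = "http://127.0.0.1:%d/" % server.server_port
page = urllib.request.urlopen(url).read().decode()
server.shutdown()
```

The HTTP request goes in, and an HTML page comes back; everything else a Web server does is elaboration on this exchange.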
To process a request, a Web server may respond with a static HTML page or image, send a redirect, or delegate the dynamic response generation to some other program such as CGI scripts, JSPs (JavaServer Pages), servlets, ASPs (Active Server Pages), server-side JavaScripts, or some other server-side technology. Whatever their purpose, such server-side programs generate a response, most often in HTML, for viewing in a Web browser. <br />Understand that a Web server's delegation model is fairly simple. When a request comes into the Web server, the Web server simply passes the request to the program best able to handle it. The Web server doesn't provide any functionality beyond simply providing an environment in which the server-side program can execute and pass back the generated responses. The server-side program usually provides for itself such functions as transaction processing, database connectivity, and messaging. <br />While a Web server may not itself support transactions or database connection pooling, it may employ various strategies for fault tolerance and scalability such as load balancing, caching, and clustering—features oftentimes erroneously assigned as features reserved only for application servers. <br />The application server<br />As for the application server, according to our definition, an application server exposes business logic to client applications through various protocols, possibly including HTTP. While a Web server mainly deals with sending HTML for display in a Web browser, an application server provides access to business logic for use by client application programs. The application program can use this logic just as it would call a method on an object (or a function in the procedural world). <br />Such application server clients can include GUIs (graphical user interface) running on a PC, a Web server, or even other application servers. 
The information traveling back and forth between an application server and its client is not restricted to simple display markup. Instead, the information is program logic. Since the logic takes the form of data and method calls and not static HTML, the client can employ the exposed business logic however it wants. <br />In most cases, the server exposes this business logic through a component API, such as the EJB (Enterprise JavaBean) component model found on J2EE (Java 2 Platform, Enterprise Edition) application servers. Moreover, the application server manages its own resources. Such gate-keeping duties include security, transaction processing, resource pooling, and messaging. Like a Web server, an application server may also employ various scalability and fault-tolerance techniques. <br />How do Web and application servers fit into the enterprise?<br />An example<br />As an example, consider an online store that provides real-time pricing and availability information. Most likely, the site will provide a form with which you can choose a product. When you submit your query, the site performs a lookup and returns the results embedded within an HTML page. The site may implement this functionality in numerous ways. I'll show you one scenario that doesn't use an application server and another that does. Seeing how these scenarios differ will help you to see the application server's function. <br />Scenario 1: Web server without an application server<br />In the first scenario, a Web server alone provides the online store's functionality. The Web server takes your request, then passes it to a server-side program able to handle the request. The server-side program looks up the pricing information from a database or a flat file. Once retrieved, the server-side program uses the information to formulate the HTML response, then the Web server sends it back to your Web browser. 
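Scenario 1 can be sketched as a single server-side function in which the lookup and the HTML formatting are fused together; the product names and prices here are made up for illustration:

```python
PRICES = {"widget": 9.99, "gadget": 24.50}   # stands in for a database or flat file

def handle_pricing_request(product):
    """Server-side program: performs the lookup AND formats the HTML reply."""
    price = PRICES.get(product)
    if price is None:
        return "<html><body>Product not found</body></html>"
    # The pricing result only ever exists embedded inside display markup.
    return "<html><body>%s costs $%.2f</body></html>" % (product, price)
```

Because the price is returned only as part of an HTML page, no other kind of client (a cash register, say) can reuse this lookup.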
<br />To summarize, a Web server simply processes HTTP requests by responding with HTML pages.<br />Scenario 2: Web server with an application server<br />Scenario 2 resembles Scenario 1 in that the Web server still delegates the response generation to a script. However, you can now put the business logic for the pricing lookup onto an application server. With that change, instead of the script knowing how to look up the data and formulate a response, the script can simply call the application server's lookup service. The script can then use the service's result when the script generates its HTML response. <br />In this scenario, the application server serves the business logic for looking up a product's pricing information. That functionality doesn't say anything about display or how the client must use the information. Instead, the client and application server send data back and forth. When a client calls the application server's lookup service, the service simply looks up the information and returns it to the client. <br />By separating the pricing logic from the HTML response-generating code, the pricing logic becomes far more reusable between applications. A second client, such as a cash register, could also call the same service as a clerk checks out a customer. In contrast, in Scenario 1 the pricing lookup service is not reusable because the information is embedded within the HTML page. To summarize, in Scenario 2's model, the Web server handles HTTP requests by replying with an HTML page while the application server serves application logic by processing pricing and availability requests. <br />Caveats<br />Recently, XML Web services have blurred the line between application servers and Web servers. By passing an XML payload to a Web server, the Web server can now process the data and respond much as application servers have in the past. 
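The reuse argument from Scenario 2 can be sketched the same way: the lookup service now returns plain data, and two different clients (a hypothetical Web script and a cash register, names invented for illustration) call the very same service:

```python
PRICES = {"widget": 9.99, "gadget": 24.50}   # stands in for the back-end database

def lookup_price(product):
    """Application-server service: returns data, says nothing about display."""
    return PRICES.get(product)

def web_page(product):
    """Client 1: a Web script that renders the service result as HTML."""
    return "<html><body>%s costs $%.2f</body></html>" % (product, lookup_price(product))

def register_total(items):
    """Client 2: a cash register that reuses the very same service."""
    return round(sum(lookup_price(p) for p in items), 2)
```

Only web_page knows anything about HTML; the pricing logic itself is shared, which is exactly the separation the scenario describes.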
<br />Additionally, most application servers also contain a Web server, meaning you can consider a Web server a subset of an application server. While application servers contain Web server functionality, developers rarely deploy application servers in that capacity. Instead, when needed, they often deploy standalone Web servers in tandem with application servers. Such a separation of functionality aids performance (simple Web requests won't impact application server performance), simplifies deployment configuration (dedicated Web servers, clustering, and so on), and allows for best-of-breed product selection. <br />Difference between Database Server and Application Server <br />An application server has applications installed on it, which users on the network can then run as if they were installed on the workstations they are using.<br /><br />A database server has programs installed that allow it to provide database services over a network. The data, queries, report generators, etc., are all stored on the server, while the client machines use a front end to access those services.<br />[or]<br />What's the difference between an Application Server and a Web Server?<br />So an app server handles the middle-tier business logic and the Web server only handles Web requests. Is that right? So is Apache Tomcat a Web server or an app server? Apache has what they call an HTTP Server, which I'm presuming is a Web server, so then what's the difference between Tomcat and the Apache HTTP Server?
The definition for Tomcat says it is a servlet container, which confuses me even more. Is this a third category altogether, or is it a polite way of saying "App Server for J2EE"? Also, I know for a fact that Sybase's EAServer is both an app server and a Web server combined. What about IBM WebSphere? Is it app, Web, both, or neither? And what app servers are usually used with IIS?<br /><br />Difference between TOMCAT server and WEBLOGIC server <br />1. Tomcat is a Web server: it runs only servlets and JSPs. WebLogic, in contrast, is an application server, so it can also run EJBs.<br />2. Servlets are pure Java classes; JSPs are translated into servlets before execution.<br />Abstract<br />Corporate databases can be linked to the Web in a manner that allows clients or employees to access corporate data through a Web browser. This paper first describes the bridge between the Web and corporate databases and discusses a series of related concepts. Secondly, a number of linking methods and their analysis are presented. Thirdly, an example Web-based application developed using different linking methods is described. Finally, application architecture analysis and preliminary performance measurement results are reported.<br />Keywords<br />World Wide Web (Web), Database connectivity, Performance Measurement<br />Introduction<br />The World Wide Web (known as "WWW" or the "Web") is growing at a phenomenal rate. The current Web is largely based on file system technology, which deals well with resources that are primarily static. However, with the unprecedented growth of resources, it is no longer adequate to rely on this conventional file technology for organising, storing, and accessing large amounts of information on the Web. Thus, many large Web sites today are turning to database technology to keep track of the increasing amount of data. Database technology has played a critical role in the information management field during the past years.
It is believed that the integration of the Web and database technology will bring many opportunities for creating advanced information management applications (Feng and Lu 1998).<br />With the increasing popularity and advancement of Web technology, many organisations want to Web-enable their existing applications and databases without having to modify existing host-based applications. This not only gives all of the existing applications a common, modern look and feel but also allows them to be deployed on corporate Intranets, the public Internet, and newer Extranets (Lu et al. 1998).<br />Taking simple data from a database and placing it on the Web is a relatively simple task. However, in most cases, the corporate data is maintained in a variety of sources, including legacy, relational, and object databases. It is much more complicated when these diverse data sources must be queried or updated (Carriere and Kazman 1997). Methods, techniques, and tools are in great demand to bridge the gap between the Web and database applications so that smooth, interactive, and integrated Web-to-database applications are made possible (Frey 1996).<br />There are many players in the industry taking up this challenge. These include major database vendors, mainframe vendors, third party software firms, Web browser vendors, and Web server vendors. A wide range of tools and philosophies has been proposed for connecting and integrating the Web and databases (Kim 1997). In our last paper (Lu et al. 1998), we presented a formal specification of web-to-database interfacing models. It is believed that web-based application architecture using different interfacing and integrating methods has much impact on the application's performance (Lazar and Holfelder 1997). This paper presents our study of this issue.<br />This paper discusses the approaches and models in Web-to-database connecting technologies based on some results of the last paper. The remainder of the paper is organised in four sections.
Section 2 describes the bridge between the Web and corporate databases and defines the related concepts used. A number of linking methods and their analysis are provided in Section 3. An example web-based application is described in Section 4. Application architecture analysis and preliminary performance measurement results are also presented in Section 4. Conclusions and future work are reported in Section 5.<br />The Bridge Between The Web And Databases<br />Delivering data over the Web is cost effective and fast, and gives Internet users easy access to databases from any location. Users expect to access databases via Web browsers with the same functionality as normal database application software. Businesses want to provide their users or customers with functions such as purchasing goods, tracking orders, searching through catalogues, receiving customised content, and viewing interesting graphics. Web-to-database integration has become central to corporate information systems construction.<br />Making database information available to Web users requires converting it from the database format to a markup language such as HTML or XML. Database packages store information in files optimised for quick access by front-end programs. When the Web server sends information to a client, the internal database format must be converted to HTML so that it is displayed correctly (Reichard 1996). A bridge between the Web and databases needs to be built. This bridge lets the Web browser replace the front-end program normally used to access the corporate databases.<br />Web-to-database connecting technology<br />To build a bridge between the Web and enterprise databases, a number of alternative technologies and architectures are available. These include:<br />• CGI (Common Gateway Interface) is a Web standard for accessing external programs, to integrate databases with Web servers.
The CGI dynamically generates HTML documents from back-end databases; <br />• Web server APIs, such as Microsoft's Internet Server API (ISAPI) and the Netscape API (NSAPI), are invoked by third party software to access remote databases; <br />• Web-ODBC (Open Database Connectivity) gateways rely on an open API (Application Programming Interface) to access database systems; <br />• Vendor-specific Web browser/data warehousing interfaces are in response to the inherent advantages of the two technologies; <br />• JDBC (Java Database Connectivity) is used from the Java programming language, for example by Java applets, to access back-end databases. <br />Each of the above technologies has strengths and weaknesses. Several factors should be considered when making selections. These include the complexity of data, the speed of deployment, the expected number of simultaneous users, and the frequency of database updates. However, new technology is emerging and several tools are already available that make this Web-to-database access optimised for improved performance (Carriere and Kazman 1997).<br />Database middleware<br />Generally, middleware can be said to be the glue (or logic) that lies between clients and servers. It deals with all the "grim stuff" of incompatible operating systems and file structures (Bernstein 1996). Programmers on both client and server ends use APIs for requesting or receiving services and data. Middleware is used to connect diverse products that do not have a common language. There are five different kinds of middleware: object request broker (ORB), message-oriented middleware (MOM), database middleware, transaction-processing (TP) monitor middleware, and remote procedure call (RPC) middleware (Lu et al. 1998).<br />Middleware technology is becoming a popular way to connect databases with the Web. Middleware is in the midst of an evolutionary growth spurt.
As it relates to the Web, the middle tier will evolve to play an important role in things such as enabling advanced multitier-application deployment, using the Web for distributed transactional systems, managing multiple execution environments with Java, C++, and ActiveX, and providing the links to existing mission-critical information resources.<br />Analysis of Different Connecting Methods<br />CGI<br />CGI is a standard for interfacing external programs with Web servers. The server submits client requests encoded in URLs to the appropriate registered CGI program, which executes and returns results encoded as MIME messages back to the server. CGI's openness avoids the need to extend HTTP. Most vendors of Web server extension tools continue to support CGI even as more advanced APIs have been added. This is due to the fact that many prewritten scripts are freely available for a variety of platforms and most of the popular Web servers.<br />CGI programs are executable programs that run on the Web server. They can be written in any scripting language (interpreted) or programming language (must be compiled first) available to be executed on a Web server, including C, C++, Fortran, PERL, TCL, Unix shells, Visual Basic, Applescript, and others. Arguments to CGI programs are transmitted from client to server via environment variables encoded in URLs. The CGI program typically returns HTML pages on the fly (Deep and Holfelder 1996). CGI lets Webmasters add common features, such as counters and date/time displays, on-line order forms, chat pages and search engines.<br />CGI also has several drawbacks. Each time a CGI script is spawned, it creates an additional process on the server machine, slowing the server's response time. Also, if the CGI script is not set up correctly, security holes can occur on the server, rendering the Web site vulnerable to attacks by hackers. 
Another problem is that it is difficult to maintain state - that is, to preserve information about the client from one HTTP request to the next (Deep and Holfelder 1996).<br />CGI is an early Web-to-database integration mechanism that is being replaced by more complex software programs that lie between the Web and database servers.<br />Server API<br />An alternative to CGI for modifying or extending the abilities of the server is to use the server's API. APIs allow the developer to modify the server's default behaviour and give it new capabilities. In addition to addressing some of the drawbacks of CGI, the use of an API offers other features and benefits, such as the ability to share data and communications resources with a server, the ability to share function libraries, and additional capabilities in authentication and error handling. Because an API application remains in memory between client requests, information about a client can be stored and used again when the client makes another request (Frey 1996).<br />There are, however, some drawbacks to this approach. Unlike CGI, API functions are server-specific, because each server has a different API. Buggy API code can crash a server. And more complexity is involved in developing the code, which must manage multiple process threads and clean up memory after it is run.<br />ODBC and JDBC<br />ODBC and JDBC are types of database access middleware. ODBC is, by far, the most popular database access middleware in use today. Vendor support for ODBC is pervasive. JDBC support isn't quite at the level of ODBC support, but JDBC is growing and flourishing. Database vendors and several third-party software houses offer ODBC and JDBC drivers for a variety of databases and operating environments.<br />From a network administrator's point of view, they consist of client and server driver software (i.e., program files).
From a programmer's point of view, they are APIs that the programmer inserts in his or her software to store and retrieve database content. While a system analyst perceives ODBC or JDBC as a conceptual connection between the application and the database, database vendors regard ODBC and JDBC as ways to entice customers who say they want to use industry standard interfaces rather than proprietary ones. And managers of data processing departments view ODBC and JDBC as insurance interfaces that offer managers some measure of flexibility should they find it necessary to replace one database product with another (Wong 1997).<br />ODBC technology now allows Web servers to be used to directly connect with databases, rather than using third party solutions. JDBC can also directly access server ODBC drivers through a JDBC/ODBC Bridge driver, available from SunSoft. ODBC driver vendors are also building bridges from ODBC to JDBC. JDBC is intended for developing client/server applications to access a wide range of backend database resources.<br />As more and more web-based applications are built using the different bridging methods discussed above, it is important to investigate how to measure the performance of each method in a consistent and fair manner.
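The bridge itself can be sketched in a few lines. Here Python's built-in sqlite3 module stands in for an ODBC or JDBC connection (the table and column names are invented for illustration), and the handler converts rows from the database's internal format into HTML on the fly:

```python
import sqlite3

def make_catalogue_db():
    """An in-memory database standing in for the corporate back end."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE catalogue (item TEXT, price REAL)")
    conn.executemany("INSERT INTO catalogue VALUES (?, ?)",
                     [("widget", 9.99), ("gadget", 24.50)])
    return conn

def catalogue_page(conn):
    """Query the database and convert the result set to an HTML table."""
    rows = conn.execute("SELECT item, price FROM catalogue ORDER BY item")
    cells = "".join("<tr><td>%s</td><td>%.2f</td></tr>" % row for row in rows)
    return "<html><body><table>%s</table></body></html>" % cells
```

Whether the driver underneath is ODBC, JDBC, or something vendor-specific, the pattern is the same: query in the database's terms, render in the browser's.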
The next section describes an application implemented using the three main bridging methods discussed in this section.Rajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-84482819226960544682008-01-14T16:46:00.000-08:002008-01-14T16:47:01.623-08:00INTERNET INFORMATION SERVICES<span style="font-weight:bold;">INTERNET INFORMATION SERVICES<br />IIS 6.0</span><br /><br />Contents<br />Introduction<br />The Application Server Role<br />Configuring the Application Server<br />Configure Your Server Wizard<br />Add/Remove Components Application<br />IIS 6.0 Architecture—A New Request Processing Architecture<br />HTTP.sys<br />WWW Service Administration and Monitoring Component<br />Server Configuration<br />Worker Process Management<br />Worker Process Isolation Mode<br />IIS 5.0 Isolation Mode<br />Benefits of the IIS 6.0 Request Processing Architecture<br />Security Enhancements<br />Locked Down Server<br />Secure Configuration for Web Servers<br />Multiple Levels of Security<br />Table 1: Security Levels in IIS 6.0<br />Unlocking Functionality with IIS 6.0 Web Service Extensions<br />Configurable Worker Process Identity<br />IIS 6.0 Runs as a Low-Privileged Account by Default<br />SSL Improvements<br />Authorization and Authentication<br />URL Authorization and Extending the New Authorization Framework<br />Constrained, Delegated Authentication<br />Manageability Enhancements<br />XML Metabase<br />Automatic Configuration Versioning and History<br />Edit-While-Running Feature<br />Export and Import Configuration<br />Server Independent Backup and Restore<br />Metabase Auditing<br />Benefits of the IIS 6.0 XML Metabase<br />IIS 6.0 WMI Provider<br />Command-Line Administration<br />New Web-based Administration Console<br />Performance and Scalability Enhancements<br />HTTP.sys—New Kernel-Mode Driver<br />Caching Policy & Thread Management<br />Web Gardens<br />Persisted ASP Template Cache<br />Large Memory Support for x86<br />Site Scalability<br />Reclaiming Resources for Idle Applications<br />Application Platform Enhancements<br />ASP.NET and IIS 6.0 Integration and Variety of Language Choices<br />ExecuteURL<br />Replacing Read Raw Data Filters<br />Global Interceptors<br />ISAPI Filters<br />VectorSend<br />Caching of Dynamic Content<br />ReportUnhealthy<br />Custom Errors<br />Unicode ISAPI<br />New COM+ Services in ASP<br />Fusion Support<br />Partition Support<br />Tracker Support<br />Apartment Model Selection<br />Platform Improvements<br />64-bit Support<br />IPv6 Support<br />Granular Compression<br />Resource Accounting and Quality-of-Service (QoS)<br />Tracing Improvements<br />Logging Improvements<br />UTF-8 Logging Support<br />Binary Logging<br />Logging of HTTP Substatus Codes<br />W3C Centralized Logging<br />File Transfer Protocol (FTP)<br />FTP User Isolation<br />Configurable PASV Port Range<br />Improved Patch Management<br /><br />Introduction<br />This document provides a technical overview of Internet Information Services (IIS) 6.0, the next generation Web server available in all versions of Microsoft® Windows® Server 2003. IIS 6.0 introduces many new features that can help increase the reliability, manageability, scalability, and security of your Web application infrastructure. IIS 6.0 is a key component of the Windows Server 2003 application platform, an integrated set of services and tools that enable the development and deployment of high-performance Web sites, Web applications, and Web services.
The benefits of deploying IIS 6.0 include less planned and unplanned system downtime, increased Web site and application availability, lower system administration costs, server consolidation (reduced staffing, hardware, and site management costs), and a significant increase in Web infrastructure security.<br /><br />Topics covered in this white paper include:<br />• The Application Server Role<br />• IIS 6.0 Architecture—A New Request Processing Architecture<br />• New Security Features<br />• New Manageability Features<br />• New Performance and Scalability Features<br />• New Programmatic Features<br />• Platform Improvements<br /><br />The Application Server Role<br /><br />The application server is a new server role for the Windows Server 2003 family of products that combines the following server technologies:<br />• Internet Information Services (IIS) 6.0<br />• Microsoft .NET Framework<br />• ASP.NET<br />• ASP<br />• UDDI Services<br />• COM+<br />• Microsoft Message Queuing (MSMQ)<br />The application server role combines these technologies into a cohesive experience, giving Web application developers and administrators the ability to host dynamic applications, such as database-driven Microsoft ASP.NET applications, without the need to install any other software on the server.<br />Configuring the Application Server<br />The application server is configurable in two places in Windows Server 2003: the Configure Your Server wizard and the Add/Remove Components application.<br />Configure Your Server Wizard<br />The Configure Your Server (CYS) wizard, which is a central point for configuring Windows Server 2003 roles, now includes the application server role. To access the Configure Your Server wizard, click Add or Remove Roles from the Manage Your Server page. This role replaces the existing Web server role.
After this new role is installed, the Manage Your Server page also includes an entry for the new role.<br />Add/Remove Components Application<br />The application server is also included in the Windows Server 2003 Add/Remove Components application as a top-level optional component. Server applications that belong to the application server (IIS 6.0, ASP.NET, COM+, and MSMQ) can be installed and have their sub-components configured using Add/Remove Components. Using Add/Remove Components to configure the application server gives more granular control over the specific sub-components that will be installed.<br /><br />IIS 6.0 Architecture—A New Request Processing Architecture<br />Web site and application code is becoming increasingly complex. Dynamic Web sites and applications might contain imperfect code that leaks memory or causes errors such as access violations. Therefore, a Web server must be an active manager of the application run-time environment and automatically detect and respond to application errors. When an application error occurs, the server needs to be fault-tolerant, meaning it must actively recycle and restart a faulty application while continuing to queue requests for the application and not interrupting the end-user’s experience. IIS 6.0 features a new fault-tolerant request processing architecture that has been designed to provide this robust and actively managed runtime, and to achieve dramatically increased reliability and scalability by combining a new process isolation model, called worker process isolation mode, with performance enhancements such as kernel-mode queuing and caching.<br />The previous version of IIS, IIS 5.0, was designed to have one process, named Inetinfo.exe, function as the main Web server process. This process transferred requests to “out of process” applications hosted in DLLHost.exe processes.
In comparison, IIS 6.0 has been redesigned into two new components, a kernel-mode HTTP protocol stack (HTTP.sys) and a user-mode administration and monitoring component. This architecture allows IIS 6.0 to separate the operations of the Web server from the processing of Web site and application code—without sacrificing performance. These two major components of the IIS 6.0 fault-tolerant architecture are:<br />• HTTP.sys. A kernel-mode HTTP protocol stack that queues and parses incoming HTTP requests, and caches and returns application and site content. HTTP.sys does not load any application code; it simply parses and routes requests.<br />• WWW Service Administration and Monitoring component. A user-mode configuration and process manager that manages server operations and monitors the execution of application code. Like HTTP.sys, this component doesn’t load or process any application code.<br />Before discussing these components, it is important to introduce two new IIS 6.0 concepts: application pools and worker processes.<br />Application pools are used to manage a set of Web sites and applications. Each application pool corresponds to one request queue within HTTP.sys and the one or more Windows processes that process these requests. IIS 6.0 can support up to 2,000 application pools per server, and there can be multiple application pools operating at the same time. For example, a departmental server might have HR in one application pool and finance in another application pool. An Internet Service Provider (ISP) might have the Web sites and applications of one customer in one application pool, and the Web sites of another customer in a different application pool. Application pools are separated from other application pools by Windows Server 2003 process boundaries.
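To make the application pool concept concrete, here is a small illustrative sketch (plain Python, not IIS code; the pool names and URL prefixes are invented) of how incoming URLs can be mapped to per-pool request queues by longest-prefix match:

```python
# Hypothetical namespace routing table: URL prefix -> application pool.
# HTTP.sys keeps a similar mapping so that each request lands on the queue
# of exactly one application pool; this is only an illustrative model.
routing_table = {
    "/": "DefaultAppPool",        # site root, configured as an application
    "/hr": "HRAppPool",           # HR application in its own pool
    "/finance": "FinanceAppPool", # finance application in its own pool
}

def route_request(url: str) -> str:
    """Return the pool whose registered prefix best matches the URL."""
    candidates = [
        prefix for prefix in routing_table
        if prefix == "/" or url == prefix or url.startswith(prefix + "/")
    ]
    # Longest matching prefix wins, mirroring most-specific-first routing.
    return routing_table[max(candidates, key=len)]
```

A request for /hr/benefits would be queued for HRAppPool, while /images/logo.gif falls through to DefaultAppPool.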
Therefore, an application in one application pool is not affected by applications in other application pools, and an application request cannot be routed to another application pool while being serviced by the current application pool. Applications can easily be assigned to another application pool while the server is running.<br />A worker process services requests for the Web sites and applications in an application pool. All Web application processing, including loading of ISAPI filters and extensions, as well as authentication and authorization, is done by a new WWW service DLL, which is loaded into one or more host worker processes. The worker process executable is named W3wp.exe.<br />HTTP.sys<br />In IIS 6.0, HTTP.sys listens for requests and queues them appropriately. Each request queue corresponds to one application pool. Because no application code runs in HTTP.sys, it cannot be affected by failures in user-mode code that normally affect the status of the Web service. If an application fails, HTTP.sys continues to accept and queue new requests on the appropriate queue until one of the following occurs: the process has been restarted and begins to accept requests, there are no queues available, there is no space left on the queues, or the Web service itself has been shut down by the administrator. Because HTTP.sys is a kernel-mode component, its queuing operation is especially efficient, enabling the IIS 6.0 architecture to combine process isolation with high-performance request processing.<br />Once the WWW service notices the failed application, it starts a new worker process if there are outstanding requests still waiting to be serviced for the worker process’s application pool.
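The accept-and-queue behavior described above can be modeled with a short sketch (a simplified toy model, not the actual HTTP.sys implementation; class and method names are invented):

```python
from collections import deque

class AppPoolQueue:
    """Toy model of a per-application-pool request queue: requests keep
    being accepted while the worker process is down, and are rejected only
    once the bounded queue is full or the service has been stopped."""

    def __init__(self, capacity=1000):
        self.pending = deque()
        self.capacity = capacity
        self.stopped = False

    def accept(self, request):
        if self.stopped or len(self.pending) >= self.capacity:
            return "503 Service Unavailable"
        self.pending.append(request)
        return "queued"

    def drain(self, worker):
        """A restarted worker process pulls queued requests directly."""
        return [worker(self.pending.popleft()) for _ in range(len(self.pending))]
```

While a worker process is being restarted, accept() keeps returning "queued", so end users see a short delay rather than an error.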
Thus, while there may be a temporary disruption in user-mode request processing, an end user does not experience the failure, because requests continue to be accepted and queued.<br />WWW Service Administration and Monitoring Component<br />The WWW Service Administration and Monitoring component makes up a core portion of the WWW service. Like HTTP.sys, no application code runs in the WWW Service Administration and Monitoring component. This component has two primary responsibilities: system configuration and worker process management.<br />Server Configuration<br />At initialization time, the configuration manager portion of the WWW service uses the in-memory configuration metabase to initialize the HTTP.sys namespace routing table. Each entry in the routing table contains information that routes incoming URLs to the application pool that contains the application associated with the URL. These pre-registration steps inform HTTP.sys that there is an application pool that responds to requests in a particular part of the namespace, and that HTTP.sys can request that a worker process be started for the application pool when a request arrives. All pre-registrations are done before HTTP.sys begins to route requests to individual processes. As application pools and new applications are added, the Web service configures HTTP.sys to accept requests for the new URLs, sets up the new request queues for the new application pools, and indicates where the new URLs should be routed. Routing information can change dynamically without requiring a service restart.<br />Worker Process Management<br />In the worker process management role, the WWW Service Administration and Monitoring component is responsible for controlling the lifetime of the worker processes that process requests. This includes determining when to start a worker process and when to recycle or restart it if it becomes unable to process any more requests (blocked).
It is also responsible for monitoring the worker processes, and can detect when a worker process has terminated unexpectedly.<br />Worker Process Isolation Mode<br />IIS 6.0 introduces a new application isolation mode for managing the processing of Web sites and applications: worker process isolation mode. Worker process isolation mode runs all application code in an isolated environment without incurring a performance penalty for that isolation. Using application pools, applications can be completely isolated from each other, so that an error in one application does not affect applications running in other processes. Requests are pulled directly from the kernel instead of having a user-mode process pull them from the kernel on behalf of the application and then route them to another user-mode process. First, HTTP.sys routes Web site and application requests to the correct application pool queue. Then, the worker processes serving the application pool pull requests directly from the application queue in HTTP.sys. This model eliminates the unnecessary process hops encountered when sending a request to an out-of-process DLLHost.exe and back again (as was the case in IIS 4.0 and 5.0), and increases performance.<br />It is important to note that, in IIS 6.0, there is no longer any notion of in-process applications. All necessary HTTP application run-time services, such as ISAPI extension support, are equally available in any application pool. This design prevents a malfunctioning Web site or application from disrupting the operation of other Web applications or the server itself. With IIS 6.0 it is now possible to unload in-process components without having to take down the entire Web service. The host worker process can be taken down temporarily without affecting other worker processes serving content. There is also a benefit from being able to leverage other operating system services available at the process level (for example, CPU throttling), per application pool.
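The isolation described above rests on operating-system process boundaries. A deliberately simple sketch (IIS is not implemented this way; this only demonstrates that a process boundary contains a crash) shows how a bug in one pool's "application code" kills only that pool's worker:

```python
import subprocess
import sys

# Worker "application code" runs in its own OS process; a bug in it can
# only kill that process, never a worker serving a different pool.
WORKER_CODE = """
import sys
pool, request = sys.argv[1], sys.argv[2]
if request == "/buggy":
    raise RuntimeError("simulated access violation")
print(f"200 OK from {pool}")
"""

def serve(pool_name, request):
    result = subprocess.run(
        [sys.executable, "-c", WORKER_CODE, pool_name, request],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # The worker died; a manager process would now start a replacement.
        return f"{pool_name}: worker crashed, restarting"
    return result.stdout.strip()
```

A crash while serving /buggy in one pool does not prevent a request to another pool from succeeding immediately afterwards.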
Additionally, Windows Server 2003 has been re-architected to support many more concurrent processes than ever before.<br />Worker process isolation mode prevents one application or site from stopping another. In addition, separating applications or sites into separate worker processes simplifies a number of management tasks, for example: taking a site/application online or offline (independent of all other sites/applications running on the system), changing a component the application uses, debugging an application, monitoring counters for an application, and throttling resources used by an application.<br />Features of IIS 6.0 worker process isolation mode include:<br />• Kernel-mode caching. Windows Server 2003 introduces a new kernel-mode HTTP driver called HTTP.sys, which is specifically tuned to increase Web server performance and scalability. Kernel-mode caching is available when using IIS 6.0, both in worker process isolation mode and in IIS 5.0 isolation mode (see below). As the single point of contact for all incoming (server-side) HTTP requests, HTTP.sys provides high-performance connectivity for HTTP server applications and provides overall connection management, bandwidth throttling, and Web server logging. IIS 6.0 has been built on top of HTTP.sys and has been specifically tuned to increase Web server throughput. In addition, under specific circumstances, HTTP.sys directly processes requests in the kernel. Both static and dynamic content from Web sites and applications can be cached in the HTTP.sys cache for high-performance responses.<br />• Clean separation between user code and the server. All user code is handled by worker processes, which are completely isolated from the core Web server. This improves upon IIS 5.0, because an ISAPI can be, and often is, hosted in-process to the core Web server. If an ISAPI loaded in a worker process fails or causes an access violation, the only thing taken down is the worker process that hosts the ISAPI.
Meanwhile, the WWW service creates a new worker process to replace the failed worker process. The other worker processes are unaffected.<br />• Multiple application pools. With IIS 5.0, applications can be pooled together out-of-process, but only in one application pool, which is hosted by DLLHost.exe. When IIS 6.0 operates in worker process isolation mode, administrators can create up to 2,000 application pools, where each application pool can be configured separately.<br />• Better support for load balancers. With the advent of application pools, IIS 6.0 has a well-defined physical separation of applications; it is quite feasible to run hundreds or even thousands of sites/applications side by side on one IIS 6.0 server. In worker process isolation mode, it is important that errors in one application do not affect other applications. IIS 6.0 can also automatically communicate with load balancers/switches to route away only the traffic for a problematic application, while still allowing the server to accept requests for the other, healthy applications. For example, imagine a server processing requests for applications A and B. If application B fails so often that IIS 6.0 decides to automatically shut it down (see section on rapid fail protection below), the server should still be able to receive requests for application A. IIS 6.0 also has a built-in extensibility model that can fire events and commands when the WWW service detects a specific application’s failure. This configuration ability allows load balancers and switches to be configured to automatically stop routing traffic to problematic applications while still routing traffic to healthy applications. <br />• Web gardens. Multiple worker processes can be configured to service requests for a given application pool. By default, each application pool has only one worker process. However, an application pool can be configured to have a set of N equivalent worker processes that share the workload. 
This configuration is known as a Web garden because it is similar in nature to a Web farm, the difference being that a Web garden exists within a single server. Requests are distributed by HTTP.sys among the set of worker processes in the group. The distribution of requests is based on a round-robin scheme, where new connections with requests for the application pool are assigned to specific worker processes in that application pool. A benefit to Web gardens is that if one worker process slows down, such as when the script engine becomes unresponsive, there are other worker processes available to accept and process requests.<br />• Health monitoring. The WWW Service Administration and Monitoring Component monitors the health of applications by pinging worker processes periodically to determine if they are completely blocked. If a worker process is blocked, the WWW service terminates the worker process and creates another worker process in its place. The WWW service maintains a communication channel to each worker process and can easily tell when a worker process fails by detecting a drop in the communication channel.<br />• Processor affinity. Worker processes can have an affinity to specific CPUs to take advantage of more frequent CPU cache (L1 or L2) hits. Processor affinity, when implemented, forces IIS 6.0 worker processes to run on specific microprocessors or CPUs and applies to all worker processes serving the Web sites and applications of an application pool. Processor affinity can also be used with Web gardens that run on multiprocessor computers where clusters of CPUs have been dedicated to specific application pools.<br />• Allocating sites and applications to application pools. In IIS 6.0, as in IIS 5.0, applications are defined as those namespaces that are labeled in the metabase with the AppIsolated property. Sites, by default, are considered to be a simple application—where the root namespace “/” is configured as an application. 
An application pool can be configured to serve anything—from one Web application to multiple applications, up to multiple sites. You can assign an application to an application pool using IIS Manager or by directly editing the metabase.<br />• Demand start. Application pools provide additional benefits, such as on-demand starting of the processes that service the namespace group when the first request for a URL in that part of the namespace arrives at the server. The WWW Service Administration and Monitoring Component does on-demand process starting, and generally controls and monitors the life cycle of worker processes.<br />• Idle time-out. An application pool can be configured to have its worker processes request a shutdown if they are idle for a configurable amount of time. This is done to free up unused resources. Additional worker processes are started when demand exists for that application pool. (For more information, see the section on Demand Start above.)<br />• Rapid-fail protection. When a worker process fails, it drops the communication channel with the WWW Service Administration and Monitoring component. The WWW Service Administration and Monitoring component detects this failure and takes action, which typically includes logging the event and restarting the worker process. In addition, IIS 6.0 can be configured to automatically disable the worker process if a particular application pool suffers a configurable number of failures in a row within a configured time period. This is known as rapid-fail protection. Rapid-fail protection places the application pool in "out-of-service" mode, and HTTP.sys immediately returns a 503–Service Unavailable, out-of-service message to any requests to that portion of the namespace—including requests already queued for that application pool.<br />• Orphaning worker processes.
Worker process isolation mode can be configured to “orphan” any worker process that it deems “terminally ill.” For example, if a worker process fails to respond to a ping message in the configured time period, normally the WWW service would terminate that worker process and start a replacement. If “orphaning” is turned on, the WWW service leaves the “terminally ill” worker process running and starts a new process in its place. Also, the WWW service can be configured to run a command on the worker process (like attaching a debugger) when it “orphans” a worker process.<br />• Recycling worker processes. Today, many businesses and organizations have problems with Web applications that leak memory, suffer from poor coding, or have indeterminate problems. This forces administrators to restart their Web servers periodically. In previous versions of IIS, it was not possible to restart a Web site without an interruption of the entire Web server. Worker process isolation mode can be configured to periodically restart worker processes in an application pool to manage faulty applications. Worker processes can be scheduled to restart based on the following criteria: elapsed time, number of requests served, scheduled times during a 24-hour period, virtual memory usage, physical memory usage, and on demand. When a worker process wants to restart, it notifies the WWW service which then tells the existing worker process to shut down and gives a configurable time limit for the worker process to drain its remaining requests. Simultaneously, the WWW service creates a replacement worker process for the same namespace group, and the new worker process is started before the old worker process stops. This process prevents service interruptions. 
The old worker process remains in communication with HTTP.sys to complete its outstanding requests, and then shuts down normally, or is forcefully terminated if it does not shut down after a configurable time limit.<br />IIS 5.0 Isolation Mode<br />Some applications may not be compatible with IIS 6.0 worker process isolation mode, such as applications written as read raw data filters or applications that depend on running in Inetinfo.exe or DLLHost.exe. Therefore, IIS 6.0 has the ability to run in another application isolation mode, called IIS 5.0 isolation mode, to ensure compatibility. IIS 5.0 isolation mode is very similar to IIS 5.0, because the same essential user-mode processes exist. In particular, the same methods of application isolation (low, medium/pooled, and high) exist, and Inetinfo.exe is still the master process through which each request must traverse. However, despite these similarities, IIS 5.0 isolation mode benefits from the kernel-mode performance of HTTP.sys request-queuing and kernel-mode caching. Note that other IIS 6.0 services, such as File Transfer Protocol (FTP), Network News Transfer Protocol (NNTP), and Simple Mail Transfer Protocol (SMTP), still work as they did in IIS 5.0 and are still contained within Inetinfo.exe. Only the WWW service in IIS 6.0 has been changed to pull requests from HTTP.sys.<br />Benefits of the IIS 6.0 Request Processing Architecture<br />The IIS 6.0 request processing architecture delivers very high levels of reliability without sacrificing performance.<br />• Increased reliability. IIS 6.0 worker process isolation mode prevents Web sites and applications from affecting each other or the server as a whole.<br />• Fewer server restarts. The user will likely never need to restart the server or shut down the entire WWW service due to a failed application or common administration operations, such as upgrading content or components, or debugging Web applications.<br />• Increased application availability.
IIS 6.0 supports auto restart of failed applications and periodic restart of leaky/malfunctioning applications, or applications with faulty code.<br />• Increased scalability. IIS 6.0 supports scaling to ISP scenarios, where there may be hundreds to thousands of sites on a server. IIS 6.0 also supports Web gardens, where a set of equivalent worker processes on a server each receive a share of the requests that are normally served by a single worker process.<br />• Strong application platform support. IIS 6.0 supports the application as the unit of administration. This includes making the application the unit of “robustness” by enabling application isolation, and also enabling resource throttling and scaling based on the application.<br /><br />Security Enhancements<br />Security has always been an important aspect of Internet Information Services. However, in previous versions of the product (e.g., IIS 5.0 running on Windows 2000 Server), the server was not shipped in a “locked down” state by default. Many unnecessary services, such as Internet printing, were on at installation. Hardening the system was a manual process, and many organizations simply left their server settings unchanged. This led to widespread vulnerability to attack, because although each server could be made secure, many administrators did not realize they needed to, or did not have the tools to do so.<br />Microsoft has significantly increased its focus on security since the development of previous versions of IIS. For example, in early 2002, the development work of all Windows engineers—more than 8,500 people—was put on hold while the company conducted intensive security training. Once the training was completed, the development teams analyzed the Windows code base, including HTTP.sys and IIS 6.0, to implement the new knowledge. This represents a substantial investment to improve the security of the Windows platform.
In addition, during the design phase of the product, Microsoft conducted extensive threat modeling to ensure that the company’s software developers understood the types of attacks that the server might face in customer deployments. Also, third-party experts have conducted independent security reviews of the code.<br />Locked Down Server<br />In order to reduce the Web infrastructure attack surface, installing Windows Server 2003 does not install IIS 6.0 by default. Administrators must explicitly select and install IIS 6.0 on all Windows Server 2003 offerings except Windows Server 2003, Web Edition. This means that IIS 6.0 no longer has to be uninstalled after Windows has been installed if it is not necessary for the server’s role (for instance, if the server is deployed to run as a mail or database server). IIS 6.0 will also be disabled when a server is being upgraded to Windows Server 2003, unless the IIS 5.0 Lockdown Tool has been installed prior to upgrade or a registry key has been configured. In addition, IIS 6.0 is configured by default in a “locked down” state when installed. After installation, IIS 6.0 accepts only requests for static files until it is configured to serve dynamic content, and all time-outs and settings are set to aggressive security defaults. IIS 6.0 can also be disabled using Windows Server 2003 group policies.<br /><br />Secure Configuration for Web Servers<br />Windows Server 2003 Service Pack 1 (SP1) includes the Security Configuration Wizard (SCW), a role-based tool you can use to create a policy that enables the services, inbound ports, and settings required for a selected server to perform a specific role. If you select the Web Server role in the wizard, SCW configures IIS 6.0 to help further reduce the attack surface of your Web server.
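The effect of this locked-down default can be sketched as an allow-list check (a hypothetical Python model; the real configuration lives in the IIS metabase and the Web Service Extensions feature, and the extension lists here are invented):

```python
# Static file types are served out of the box; dynamic handlers start
# disabled and must be switched on explicitly by an administrator.
STATIC_EXTENSIONS = {".htm", ".html", ".gif", ".jpg", ".css", ".txt"}
enabled_extensions = set()

def enable_extension(ext):
    """Administrator explicitly unlocks a dynamic handler, e.g. '.asp'."""
    enabled_extensions.add(ext)

def handle_request(path):
    ext = path[path.rfind("."):] if "." in path else ""
    if ext in STATIC_EXTENSIONS:
        return "200 OK (static file)"
    if ext in enabled_extensions:
        return "200 OK (dynamic handler)"
    # Unrecognized or still-locked extensions are refused outright.
    return "404 Not Found"
```

Until enable_extension(".asp") is called, a request for /default.asp is refused even if the file exists on disk.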
<br /><br />Multiple Levels of Security<br /><br />The following table summarizes the multiple levels of security available in IIS 6.0.<br />• Not installed by default on Windows Server 2003. Much of security is about reducing the attack surface of your system. Therefore, IIS 6.0 is not installed by default on Windows Server 2003. Administrators must explicitly select and install IIS 6.0.<br />• Installs in a locked down state. The default installation of IIS 6.0 exposes only minimal functionality. Only static files are served, and all other functionality (such as ASP and ASP.NET) has to be enabled explicitly by the administrator.<br />• Disabled on upgrades. For Windows Server 2003 upgrades to servers with IIS installed, if the administrator did not install and run the Lockdown Tool or configure the RetainW3SVCStatus registry key on the server being upgraded, IIS 6.0 will be installed in a disabled state.<br />• Disabling via Group Policy. With Windows Server 2003, domain administrators can prevent users from installing IIS 6.0 on their computers.<br />• Running as a low-privileged account. IIS 6.0 worker processes run in a low-privileged user context by default. This drastically reduces the effect of potential attacks.<br />• Secure ASP. All ASP built-in functions always run as a low-privileged account (anonymous user).<br />• Recognized file extensions. IIS 6.0 serves only requests for files that have recognized file extensions and rejects requests for file extensions it doesn’t recognize.<br />• Command-line tools not accessible to Web users. Attackers often take advantage of command-line tools that are executable via the Web server. In IIS 6.0, command-line tools can’t be executed by the Web server.<br />• Write protection for content. Once attackers get access to a server, they try to deface Web sites. By preventing anonymous Web users from overwriting Web content, these attacks can be mitigated.<br />• Time-outs and limits. Product settings are set to aggressive and secure defaults.<br />• Upload data limitations. Administrators can limit the size of data that can be uploaded to a server.<br />• Buffer overflow protection. Like the rest of Windows, IIS worker processes are compiled with options that monitor the Windows stack and exit the process if a buffer overflow is detected.<br />• File verification. The core server verifies that the requested content exists before it gives the request to a request handler (ISAPI extension).<br />Table 1: Security Levels in IIS 6.0<br />Unlocking Functionality with IIS 6.0 Web Service Extensions<br />In an effort to reduce the attack surface of your Web server, IIS 6.0 serves only static content after a default installation. Programmatic functionality provided by Internet Server API (ISAPI) extensions or Common Gateway Interfaces (CGI) must be manually enabled by an IIS 6.0 administrator. ISAPI and CGI extend the functionality of your Web pages, and for this reason are referred to as Web service extensions. For example, in order to run Active Server Pages (ASP) on IIS 6.0, the ISAPI that implements ASP (asp.dll) must be specifically enabled as a Web service extension. Microsoft FrontPage® Server Extensions and ASP.NET also have to be enabled before their functionality works. Using the Web Service Extensions feature, Web site administrators can enable or disable IIS 6.0 functionality based on the individual needs of the organization. This functionality is globally enforced across the entire server. IIS 6.0 provides programmatic, command-line, and graphical interfaces for enabling Web service extensions.<br />Configurable Worker Process Identity<br />Running multiple applications or sites on one Web server puts additional requirements on the server.
If an ISP hosts two companies, which may even be competitors, on one server, it has to guarantee that these two applications run isolated from each other. More importantly, the ISP has to make sure that a malicious administrator for one application can’t access the data of the other application. IIS 6.0 provides this level of isolation through the configurable worker process identity. Together with other isolation features, like bandwidth and CPU throttling, or memory-based recycling, IIS 6.0 provides an environment to host multiple applications on one server that are completely separated. You can configure the base process identity of an application pool to be specific to the user that runs in that pool to further enhance the isolation of the pool.<br />IIS 6.0 Runs as a Low-Privileged Account by Default<br />By default, a worker process runs as the Network Service account, which is a new built-in account with exactly seven privileges:<br />• Adjust memory quotas for a process<br />• Generate security audits<br />• Log on as a service<br />• Replace a process level token<br />• Impersonate a client after authentication<br />• Allow log on locally<br />• Access this computer from the network<br />Running as a low-privileged account is one of the most important security principles. The ability to exploit a security vulnerability can be contained effectively if the worker process has very few rights on the underlying system. Administrators can configure the application pool to run as any account (Network Service, Local System, Local Service, or a configured account) if desired.<br />SSL Improvements<br />There are four main secure sockets layer (SSL) improvements in IIS 6.0. They are:<br />• Performance. IIS 5.0 already provides the fastest software-based SSL implementation on the market. As a result, 50% of all SSL Web sites run on IIS 5.0. IIS 6.0 SSL is even faster.
Microsoft tuned and streamlined the underlying SSL implementation for even better performance and scalability.<br />• Remotable Certificate Object. In IIS 5.0, administrators cannot manage SSL certificates remotely because the cryptographic service provider certificate store is not remotable. Because customers manage hundreds or even thousands of IIS servers with SSL certificates, they need a way to manage certificates remotely. The CertObject allows customers to do this.<br />• Selectable Cryptographic Service Provider. When SSL is enabled, performance drops dramatically because the CPU has to perform a great deal of intensive cryptography. However, there are hardware-based accelerator cards that allow these cryptographic computations to be offloaded to hardware. Vendors can plug their own CryptoAPI provider (a cryptographic service provider, or CSP) into the system, and with IIS 6.0 it’s easy to select such a third-party provider.<br />• Kernel-Mode SSL. You can run SSL in kernel mode, instead of the default user mode. Running in kernel mode means that components or processes run in the core address space of the operating system. Moving encryption and decryption operations to the kernel improves SSL performance by reducing the number of transitions between kernel mode and user mode. Enabling kernel-mode SSL requires setting a new registry key, EnableKernelSSL.<br />Authorization and Authentication<br />If authentication answers the question “Who are you?” then authorization answers the question “What can you do?” Authorization is about allowing or denying a user permission to perform a certain operation or task. Windows Server 2003 integrates .NET Passport as a supported authentication mechanism for IIS 6.0. IIS 6.0 extends the use of a new authorization framework that comes with the Windows Server 2003 family. Additionally, Web applications can use URL authorization in tandem with Authorization Manager to control access.
Constrained, delegated authentication was added in Windows Server 2003 to give domain administrators control to allow delegation to particular machines and services only.<br />.NET Passport Integration with IIS 6.0<br />The integration of .NET Passport with IIS 6.0 provides .NET Passport authentication services in the core Web server. .NET Passport 2.0 uses interfaces provided by standard Passport components such as Secure Sockets Layer (SSL) encryption, HTTP redirects, and cookies. Administrators can make their Web sites and applications available to the entire .NET Passport customer base, which comprises over 150 million users, without having to deal with account management issues such as password expiration or provisioning. After a user has been authenticated, the user’s .NET Passport Unique ID (PUID) can be mapped to an account in Microsoft Active Directory® directory services, if such provisioning has been configured for your Web sites. A token is created by the Local Security Authority (LSA) for the user and set by IIS 6.0 for the HTTP request. Application developers and Web site administrators can use this security model for authorization based on Active Directory users. These credentials can also be delegated by using the new constrained delegation feature supported in Windows Server 2003.<br />URL Authorization and Extending the New Authorization Framework<br />Today, access control lists (ACLs) are used to make authorization decisions. The problem is that the ACL model is very “object driven” (focused on file and directory objects) and was designed to fulfill the requirements of the resource manager, the NTFS file system, not the application developer. Most Web-based business applications, on the other hand, are not object driven; they are operation-based or task-based. If an application developer needs an operation-based or task-based access control model, they must create it separately.
The new authorization framework in Windows Server 2003 provides a way to solve this problem. IIS 6.0 extends the use of this new framework by providing gatekeeper authorization for specific URLs. Additionally, Web applications can use URL authorization in tandem with Authorization Manager to control, from within the same policy store, both access to the URLs that make up a Web application and application-specific tasks and operations. Maintaining the policy in the same policy store allows administrators to manage access to the URLs and application features from a single point of administration, while leveraging store-level application groups and user-programmable business rules.<br />Constrained, Delegated Authentication<br />Delegation is the act of allowing server applications to act as the user on the network. An example would be a Web service application on an enterprise intranet that accesses information from various other servers in the enterprise as the client, and then presents the consolidated data over HTTP to the end user. Constrained delegation was added in Windows Server 2003 to give domain administrators control to allow delegation to particular computers and services only. The following are delegation recommendations:<br />• Delegation should not allow a server to connect on behalf of the client to any resource in the domain/forest. Only connections to particular services (for example, a back-end SQL database or a remote file store) should be allowed. Otherwise, a malicious server administrator or application could impersonate the client and authenticate against any resource in the domain on behalf of the client.<br />• Delegation should not require the client to share its credentials with the server.
If a malicious server administrator or application obtains your credentials, it can use them throughout the whole domain, not just against the intended back-end data store.<br />Constrained, delegated authentication is a highly desirable way to design an application suite in the Windows Server 2003 environment, because there are many opportunities to leverage high-level protocols such as Remote Procedure Call (RPC) and Distributed Component Object Model (DCOM). These protocols can be used to transparently carry the user context from server to server, impersonate that context, and have requests authorized as the user according to the authorization rules defined by domain group information, local group information, and discretionary access control lists (DACLs) on resources located on the server.<br /><br />Manageability Enhancements<br />The typical Internet Web site no longer operates on just one server. Web sites now span multiple Web servers, or Web farms. (Web farms are clusters of servers that are dedicated to delivering content, business logic, and services.) Even intranet sites have increased in number as businesses and organizations develop and deploy more applications, especially Web-enabled, line-of-business applications. In addition, as remote administration has become more common, there has been increasing demand for improved API access and direct configuration support. With the Internet and intranet changes of the past few years, managing a Web site is no longer as simple as managing one or a few Web servers from an office; it has become an intricate and complex process.<br />IIS 6.0 introduces new features to improve the administration of Web sites.
The IIS 6.0 configuration store is now expressed as plain text XML, which allows direct text editing of the metabase configuration in a robust and recoverable fashion, even while the server is running. Furthermore, Windows Management Instrumentation (WMI) support and improved command-line scripting enable programmatic Web site administration without the GUI-based IIS Manager. IIS 6.0 also includes a new Web-based remote administration console called the Remote Administration Tool.<br />XML Metabase<br />The metabase is a hierarchical store of configuration values used by IIS 6.0 that incorporates rich functionality, such as inheritance, data typing, change notification, and security. The metabase configuration for IIS 4.0 and IIS 5.0 was stored in a proprietary binary file and was not easily readable or editable. IIS 6.0 replaces that proprietary binary file, called MetaBase.bin, with plain text XML-formatted files. Administrators and application developers have long expressed the need for an accessible, fast configuration store that doesn’t have a “black-box” feel to it. The new XML metabase meets these needs by addressing performance and manageability through the features outlined below. Active Directory Service Interfaces (ADSI) schema and schema extensibility continue to be supported.
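To make the plain-text format concrete, here is a simplified, illustrative MetaBase.xml fragment. Properties such as ServerComment, ServerBindings, and Path are real metabase properties, but the layout below is abridged for illustration and should not be taken as an authoritative sample:

```xml
<configuration xmlns="urn:microsoft-catalog:XML_Metabase_V64_0">
  <MBProperty>
    <!-- One Web site: site ID 1 under the W3SVC service -->
    <IIsWebServer Location="/LM/W3SVC/1"
                  ServerComment="Default Web Site"
                  ServerBindings=":80:" />
    <!-- The site's home directory (virtual root) -->
    <IIsWebVirtualDir Location="/LM/W3SVC/1/ROOT"
                      Path="c:\inetpub\wwwroot"
                      AccessFlags="AccessRead | AccessScript" />
  </MBProperty>
</configuration>
```

Because entries like these are ordinary text, they can be diffed, versioned, and edited with any text editor, which is exactly what the features below build on.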
The schema itself is stored in a human-readable, editable text format and continues to support ADSI.<br />The new XML metabase improves server manageability by enabling the following scenarios:<br />• Direct metabase configuration troubleshooting and editing in a robust fashion<br />• Reuse of familiar text tools such as windiff, version control systems, and text editors<br />• Configuration rollback<br />• Versioned history archives containing copies of the metabase for each change<br />• Web site and application configuration cloning<br />• Server-independent backup and restore<br />The new XML metabase allows administrators to easily read and edit configuration directly, without having to use scripts or code to administer the Web server. It also makes it much easier to diagnose potential metabase corruption and to extend the existing metabase schema via XML. In addition, administrators can write configuration changes directly to the metabase file while remaining 100% compatible with existing public metabase APIs and ADSI. The binary metabase used in previous versions of IIS is automatically upgraded to the new XML metabase used in IIS 6.0.<br />Automatic Configuration Versioning and History<br />The metabase history feature automatically keeps track of changes to the metabase that are written to disk. When the metabase is written to disk, IIS 6.0 marks the new MetaBase.xml file with a version number and saves a copy of the file in the history folder. Each history file is marked with a unique version number, which is then available for rollback or restore. The metabase history feature is enabled by default.<br />Edit-While-Running Feature<br />IIS 6.0 gives administrators the important capability to change the server configuration while the server continues running, by directly editing the MetaBase.xml file.
For example, this feature can be used to add a new site, create virtual directories, or change the configuration of application pools and worker processes, all while IIS 6.0 continues to process requests. No recompilation or restart is required. The administrator can do this easily by opening the MetaBase.xml file in Notepad, creating the needed virtual directory entry, and saving the file, all while IIS is running. The new changes are detected, checked for correctness, and applied to the metabase if they conform to the schema.<br />Export and Import Configuration<br />IIS 6.0 introduces two new Admin Base Object (ABO) methods, Export() and Import(). These methods allow the configuration at any node level to be exported and imported across servers. Secure data is protected with a user-supplied password, similar to the new backup/restore support. These new methods are also available to ADSI and WMI users and through IIS Manager. Using Export() and Import(), administrators can complete the following tasks:<br />• Export one node or an entire tree to an XML file from any level of the metabase<br />• Optionally export inherited configuration<br />• Import one node or an entire tree from an XML file<br />• Optionally import inherited configuration<br />• Password-protect secure data<br />• Optionally merge imported configuration with existing configuration<br />Server-Independent Backup and Restore<br />In IIS 6.0, a new Admin Base Object (ABO) API is available for developers to back up and restore the metabase with a password. This allows administrators and developers to create server-independent backups. The session key is encrypted with an optional user-supplied password during backup and is not based on the machine key. When backing up the metabase, the system encrypts the session key with the password supplied by the user. When restoring, the supplied password decrypts the session key, and the session key is re-encrypted with the current machine key.
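The password-based flow can be pictured with a toy sketch. Here XOR against a hash-derived keystream stands in for the real cipher, and all names and values are purely illustrative; nothing below reflects the actual IIS implementation, only the idea that the backup is keyed to a password rather than to one machine:

```python
from hashlib import sha256

def keystream(secret: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from a secret (toy KDF, not real crypto).
    out = b""
    counter = 0
    while len(out) < n:
        out += sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, secret: bytes) -> bytes:
    # XOR stand-in for a real cipher; applying it twice recovers the input.
    return bytes(a ^ b for a, b in zip(data, keystream(secret, len(data))))

session_key = b"metabase-session-key-0123456789"

# Backup: protect the session key with the user-supplied password,
# so the backup is not tied to this machine's key.
backup_blob = xor_crypt(session_key, b"user-password")

# Restore on another machine: recover the session key with the password,
# then re-encrypt it with that machine's own key.
recovered = xor_crypt(backup_blob, b"user-password")
reencrypted = xor_crypt(recovered, b"new-machine-key")

assert recovered == session_key
assert reencrypted != session_key
```

The point of the sketch is the portability property: anyone holding the password can restore the backup on any server, after which the key is bound to the new machine.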
This new restore method can also restore backups made with the old backup method, and it follows the same behavior as the old restore method when a session key cannot be decrypted. WMI and ADSI support these methods. The existing metabase backup/restore user interface also uses the new backup/restore method.<br />Metabase Auditing<br />Beginning with Windows Server 2003 Service Pack 1 (SP1), IIS 6.0 includes a metabase auditing feature that allows tracking of each change made to the metabase. Metabase auditing is enabled by setting an audit access control entry (ACE) on a node in the metabase. After the ACE is enabled, whenever a metabase change takes place on that node, an audit event is published in the NT Security event log. You can also use the new /enableaudit and /disableaudit switches on IIScnfg.vbs to enable and disable auditing. Using metabase auditing, you can keep track of:<br />• What was changed (metabase node, property, and old and new values).<br />• When the change was made (date and time).<br />• Who made the change (domain and user name).<br />• Success or failure of the change attempt (HRESULT).<br />• Where a remote change came from (client IP address).<br />Note: To avoid disclosing sensitive information, such as passwords, values of secure properties do not appear in audit event log entries.<br />Benefits of the IIS 6.0 XML Metabase<br />The IIS 6.0 metabase file offers improved performance and scalability. The XML metabase has a comparable or smaller disk footprint and faster read times on Web server startup than the IIS 5.0 binary metabase, and equivalent write performance.
Additional XML metabase benefits include:<br />• Improved backup/restore capabilities on machines that experience critical failures<br />• Improved troubleshooting and metabase corruption recovery<br />• Metabase files that can be edited directly using common text editing tools<br />• Application configuration that is exportable and importable at user-specified locations<br />IIS 6.0 WMI Provider<br />Windows 2000 introduced Windows Management Instrumentation (WMI), a new means of configuring the server and gaining access to important data such as performance counters and system configuration. IIS 6.0 now has a WMI provider, giving administrators the opportunity to use WMI capabilities such as query support and associations between objects. WMI presents a rich set of programming interfaces that offer more powerful and flexible ways to administer your Web server. The IIS 6.0 WMI provider offers functionality similar to the IIS 6.0 ADSI provider for editing the metabase.<br />The goal of the IIS 6.0 WMI provider is to allow manageability of IIS 6.0 at a level of functionality equivalent to the IIS 6.0 ADSI provider and also to support an extensible schema. This requires a WMI schema that is congruent with the IIS 6.0 metabase schema. While they may differ in terms of their object and data models, ADSI and WMI offer equivalent functionality. In other words, an administration task can be scripted using either the ADSI or the WMI model, and the effect on the metabase of running the same script expressed as ADSI or WMI is equivalent. Likewise, any schema extensions made through ADSI are reflected in the WMI provider automatically.<br />Command-Line Administration<br />IIS 6.0 now ships supported scripts in the Windows\System32 directory that can be used to administer the server. These scripts, written in the Microsoft Visual Basic® scripting language, use the IIS 6.0 WMI provider to get and set configuration information within the metabase.
These scripts are designed to perform many of the most common tasks facing a Web administrator from the command line, without a graphical user interface. IIS 6.0 includes the following command-line administration scripts:<br />• IISweb.vbs: create, delete, start, stop, and list Web sites<br />• IISftp.vbs: create, delete, start, stop, and list FTP sites<br />• IISvdir.vbs: create and delete virtual directories, or display the virtual directories of a given root<br />• IISftpdr.vbs: create, delete, or display virtual directories under a given root<br />• IIScnfg.vbs: export and import IIS 6.0 configuration to and from an XML file<br />• IISback.vbs: back up and restore IIS 6.0 configuration<br />• IISapp.vbs: list process IDs and application pool IDs for currently running worker processes<br />• IISext.vbs: configure Web service extensions<br />New Web-based Administration Console<br />IIS 6.0 includes a new Web-based administration console called the Remote Administration Tool. Using the Remote Administration Tool, administrators can remotely administer IIS 6.0 across the Internet or an intranet through a Web browser.<br /><br />Performance and Scalability Enhancements<br />A new generation of applications puts greater demands on the performance and scalability of Web servers. Increasing the speed at which HTTP requests can be processed, and allowing more applications and sites to run on one server, translates directly into fewer servers needed to host a site. It also means that existing hardware investments can be sustained longer while handling greater capacity.<br />Windows Server 2003 introduces a new kernel-mode driver, HTTP.sys, for HTTP parsing and caching.
HTTP.sys is specifically tuned to increase Web server throughput and is designed to avoid a transition to user mode if the requested content can be processed directly in the kernel. This is important to IIS users because IIS 6.0 is built on top of HTTP.sys. If a user-mode component needs to be involved in processing a request, HTTP.sys routes the request to the appropriate user-mode worker process without any other user-mode process getting involved in the routing decision.<br />IIS 6.0 is also more aware of its processing environment. IIS 6.0 kernel- and user-mode components are written to be aware of processor locality, and do their best to maintain per-processor internal data locality. This can add to the scalability of a server on multiprocessor systems. Additionally, administrators can establish affinity between the workloads of particular applications or sites and specific processor subsystems. This means that applications can be set up as virtual processing silos within one operating system image, as shown in Figure 1 below.<br /><br />Figure 1. Virtual request processing silos in IIS 6.0<br />HTTP.sys: New Kernel-Mode Driver<br />The new kernel-mode driver, HTTP.sys, is the single point of contact for all incoming (server-side) HTTP requests. This provides high-performance connectivity for HTTP server applications. The driver sits atop TCP/IP and receives all connection requests from the IP/port combinations on which it is configured to listen. HTTP.sys is also responsible for overall connection management, bandwidth throttling, and Web server logging.<br />Caching Policy and Thread Management<br />IIS 6.0 has advanced heuristics built in to determine the cacheable hot set of an application or set of sites.
Just because an item is cacheable does not necessarily mean it makes sense to add it to an in-memory cache, because there is a cost to managing the item and the memory it consumes. Therefore, IIS 6.0 uses a new heuristic to determine which items should be cached based on the distribution of requests that a particular application receives. This improves the Web server’s scalability, because the server makes better use of its resources while sustaining performance on frequent requests. IIS 6.0 also has built-in heuristics that monitor the overall state of the server and decide whether to increase or reduce concurrency on that basis. The central idea is to use concurrency efficiently; for example, when executing processor-bound requests, starting more concurrent work is not always the best approach.<br />Web Gardens<br />A Web garden is an application pool that has multiple processes serving the requests routed to that pool. You can configure the worker processes in a Web garden to be bound to a given set of CPUs on a multiprocessor system. Using Web gardens, Web applications gain scalability because a software lock in one process does not block all the requests going to an application. If there are four processes in the Web garden, a given software lock blocks only roughly a quarter of the requests.<br />Persisted ASP Template Cache<br />Before ASP code is executed in IIS 5.0, the ASP engine compiles an ASP file to an ASP template. These ASP templates are stored in process memory. If a site consists of numerous ASP pages, this cache de-allocates the oldest templates from memory to free space for new ones. With IIS 6.0, these templates are stored on disk.
If one of these ASP files is requested again, the ASP engine loads the template from disk instead of loading the ASP file and spending additional CPU time compiling it again.<br />Large Memory Support for x86<br />For workloads that require a great deal of cached data, IIS 6.0 can be configured to cache up to 64 gigabytes (GB) on an x86 system.<br />Site Scalability<br />IIS 6.0 has improved the way internal resources are used. IIS 6.0 allocates resources as HTTP requests call for them, rather than pre-allocating resources at initialization time. This has resulted in the following improvements:<br />• Many more sites can be hosted on a single IIS 6.0 server<br />• A larger number of worker processes can be concurrently active<br />• Quicker startup and shutdown of the server when hosting sites<br />Preliminary testing shows that an order of magnitude more pooled applications can run on IIS 6.0 than on IIS 5.0. IIS 6.0 is capable of having thousands of isolated applications configured, and each of these applications can run with its own application pool worker process identity. Of course, the number of concurrent isolated applications is also a function of system resources. IIS 6.0 can easily have tens of thousands of configured applications per server when applications are configured to execute in a shared application pool.<br />Reclaiming Resources for Idle Applications<br />An additional scalability improvement in the new IIS 6.0 architecture is that IIS 6.0 can time out application pool worker processes that have been idle for a configured period. If there are no requests for a given application pool over that period, there is no reason for the pool to continue consuming resources, so IIS 6.0 can be configured to shut down its idle worker processes.
IIS 6.0 also dynamically trims kernel-cached items for these inactive sites. Coupled with idle time-out is the ability to start worker processes on demand. While there may be no worker process running and serving an application pool, one is started when the first request for that pool arrives. In this way, IIS 6.0 consumes resources only when there is demand for them.<br /><br />Application Platform Enhancements<br />IIS 6.0 provides several new programmatic features and continues to build on the ISAPI programming model. These new features include:<br />• ASP.NET and IIS 6.0 integration<br />• Internal redirection (ExecuteURL and global interceptors)<br />• Buffer and handle send (VectorSend), including the ability to mark a response as cacheable in the HTTP.sys kernel cache<br />• Caching of dynamic content: ASP.NET responses can be marked as cacheable in the HTTP.sys kernel cache; other ISAPI extensions can use the new VectorSend server support function (HSE_REQ_VECTOR_SEND) to mark responses as cacheable in HTTP.sys as well<br />• ISAPI support for custom errors<br />• Worker process recycling<br />• Improved ISAPI Unicode support<br />• New COM+ services in ASP<br />ASP.NET and IIS 6.0 Integration and Variety of Language Choices<br />Windows Server 2003 offers an improved developer experience through ASP.NET and IIS 6.0 integration. Building upon IIS 6.0, platform enhancements offer developers very high levels of functionality, such as rapid application development and a wide variety of languages to choose from. With Windows Server 2003, the experience of using ASP.NET and the .NET Framework is improved as a result of enhanced process model integration in IIS 6.0. IIS 6.0 offers support for the latest Web standards, including XML, SOAP, and IPv6.
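The "buffer and handle send" (VectorSend) item in the feature list above saves kernel transitions by handing the server an ordered list of buffers to transmit as one operation. The same scatter/gather idea exists in POSIX as writev, sketched here in Python purely as an analogy; this is not IIS code, and the buffers are illustrative:

```python
import os

# Three separate pieces of a response that would otherwise need
# three write() calls, and so three kernel transitions.
parts = [b"<html>", b"<body>Hello</body>", b"</html>"]

r, w = os.pipe()
try:
    # os.writev submits all buffers in a single system call; this is the
    # same transition-saving idea behind IIS 6.0's HSE_REQ_VECTOR_SEND.
    written = os.writev(w, parts)
finally:
    os.close(w)

data = os.read(r, 1024)
os.close(r)

assert written == len(data) == len(b"".join(parts))
assert data == b"<html><body>Hello</body></html>"
```

One call, one transition, one contiguous response on the wire: the caller never has to concatenate the pieces into a single buffer itself.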
<br />ExecuteURL<br />The HSE_REQ_EXEC_URL server support function allows an ISAPI extension to easily redirect a request to another URL. It answers a growing demand from ISAPI extension developers to “chain” requests together.<br />Replacing Read Raw Data Filters<br />ExecuteURL provides functionality that can replace almost all read raw data filters. The most common customer scenario for developing read raw data filters is the need to examine or modify the request (headers or entity body) before the target URL processes it. Currently, the only way to see the entity body of a request (if you are not the target URL) is through read raw data notifications. Unfortunately, writing an ISAPI filter to accomplish this goal can be exceedingly difficult, or even impossible, in some configurations. ISAPI extensions, on the other hand, provide functionality for easy retrieval and manipulation of the entity body. ExecuteURL allows an ISAPI extension to process the request entity body and pass it to a child request, meeting the needs of nearly all read raw data filter developers.<br />Global Interceptors<br />ExecuteURL allows IIS 6.0 to implement ISAPI request interceptors that can intercept, change, redirect, or deny every incoming HTTP request for a specific URL space.<br />• IIS 5.0 already supports a single ISAPI extension that intercepts all requests, via a wildcard (*) script map configured by editing the application mappings for an application.<br />• In IIS 6.0, the single wildcard (*) script map concept is extended to allow multiple global interceptors to execute.<br />ISAPI Filters<br />Intercepting all requests for a specific URL was functionality that was possible only in ISAPI filters in previous versions of IIS.
ISAPI filters have the following problems:<br />• ISAPI filters are global to a Web site.<br />• They can’t perform long-running operations (for example, database queries) without starving the IIS 6.0 thread pool.<br />• They can’t access the entity body of the request unless they are read raw data filters.<br />Because global interceptors are ISAPI extensions, they don’t have the limitations of ISAPI filters, and together with ExecuteURL they provide the functionality to replace almost all read raw data filters.<br />VectorSend<br />Today, ISAPI developers have only two options when multiple buffers make up a response: they can call WriteClient multiple times, or they can assemble the response in one big buffer.<br />• The first approach is a performance bottleneck, because there is one kernel-mode transition per buffer.<br />• The second approach also costs performance, and it needs additional memory.<br />VectorSend is the IIS 6.0 solution to these problems. Implemented as a server support function for ISAPI extensions, VectorSend allows developers to put together an ordered list of buffers and file handles to send, and then hand it off to IIS 6.0 to compile the final response. HTTP.sys assembles all the buffers and/or file handles into one response buffer within the kernel and then sends it. This frees the ISAPI extension from having to construct the buffer itself or make multiple WriteClient calls.<br />Caching of Dynamic Content<br />Another new feature is the implementation of a kernel-mode cache for dynamic content. The benefit of this feature is that much programmatically created content doesn’t actually change. In previous versions of IIS, requests had to transition from kernel mode to user mode for every dynamic request, and the responses had to be regenerated.
Eliminating this transition and pulling the cached content from the kernel-mode cache results in a marked performance improvement.<br />ReportUnhealthy<br />A new ISAPI extension server support function called HSE_REQ_REPORT_UNHEALTHY allows an ISAPI extension to ask the IIS 6.0 worker process to be recycled. Developers can use this new server support function to request a recycle if their ISAPI application becomes unstable or enters an unknown state for any reason.<br />Note: To enable recycling after an ISAPI calls HSE_REQ_REPORT_UNHEALTHY, health monitoring must be turned on.<br />The developer can also pass in a string describing why the ISAPI is calling HSE_REQ_REPORT_UNHEALTHY. This string is then added to the event the worker process publishes to the Application event log.<br />Custom Errors<br />ISAPI developers no longer need to generate their own error messages. Instead, they can plug into the custom error support built into IIS 6.0 through a new server support function called HSE_REQ_SEND_CUSTOM_ERROR.<br />Unicode ISAPI<br />Unicode is becoming more and more important in a global economy. Because of the non-Unicode structure of the HTTP protocol, IIS 5.0 limits the developer to the system code page. With UTF-8 encoded URLs, Unicode becomes possible. IIS 6.0 allows customers to retrieve server variables in Unicode and adds two new server support functions that let developers get the Unicode representation of a URL. International customers with multi-language sites benefit from this feature and the improved development experience.<br />New COM+ Services in ASP<br />In IIS 4.0 and 5.0, ASP applications can use COM+ services by configuring the application’s WAM object in the COM+ configuration store to use a set of services.
This was because COM+ services were developed to be used in conjunction with COM components.<br />The IIS 6.0 and COM+ teams have separated COM+ services from components, allowing ASP applications to use a set of COM+ services in IIS 6.0. In addition to the services available in COM+ on Windows 2000, a few new services have been added and are supported in ASP.<br />Fusion Support<br />Fusion allows an application developer to specify the exact versions of system run-time DLLs and classic COM components that work with their application. When the application is loaded and running, it always receives those versions of the run-time libraries and COM components. Previously, applications had to use whatever version of a system runtime DLL was installed on the system, which could present problems if a newer version changed functionality in some way.<br />Partition Support<br />COM+ partitions allow an administrator to define different configurations of a single COM+ application for different users. This configuration includes security and versioning information. For more information about COM+ partitions, consult the COM+ documentation.<br />Tracker Support<br />When enabled, the COM+ tracker allows administrators to monitor what code is running within the ASP session, and when. This information is extremely helpful for debugging ASP applications. For more information about the COM+ tracker, consult the COM+ documentation.<br />Apartment Model Selection<br />ASP, through COM+, allows developers to determine which threading model to use when executing the pages in an application. By default, ASP uses the Single-Threaded Apartment.
However, if the application uses poolable objects, it can be run in the Multi-Threaded Apartment.<br /><br />Platform Improvements<br />In addition to the features described above, IIS 6.0 has made a number of improvements to the platform overall. These features make IIS 6.0 a more compelling Web application platform.<br />64-bit Support<br />The complete Windows Server 2003 family code base is compiled for 32-bit and 64-bit platforms. Customers who demand highly scalable applications can take advantage of an operating system that runs and is supported on both platforms. In addition, Windows Server 2003, Service Pack 1 introduces a compatibility layer that enables you to run 32-bit Web applications on 64-bit Windows.<br />Running 32-bit Applications on 64-bit Windows<br />Windows Server 2003, Service Pack 1 introduces a compatibility layer, known as Windows-32-on-Windows-64 (WOW64), that is intended to run 32-bit personal productivity applications needed by software developers and administrators, including 32-bit Internet Information Services (IIS) Web applications.<br />On 64-bit Windows, 32-bit processes cannot load 64-bit DLLs, and 64-bit processes cannot load 32-bit DLLs. If you plan to run 32-bit applications on 64-bit Windows, you must configure IIS to create 32-bit worker processes. Once you have configured IIS to create 32-bit worker processes, you can run the following types of IIS applications on 64-bit Windows:<br />• Internet Server API (ISAPI) extensions<br />• ISAPI filters<br />• Active Server Page (ASP) applications (specifically, scripts calling COM objects, where the COM object can be 32-bit or 64-bit)<br />• ASP.NET applications<br /><br />IIS can, by default, launch Common Gateway Interface (CGI) applications on 64-bit Windows, because CGI applications run in a separate process.
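The bitness rule above (a 32-bit process cannot load a 64-bit DLL, and vice versa) can be checked from inside any process. The following Python sketch is illustrative only, not IIS code: it reports the bitness of the current process from its pointer size and encodes the loading rule as a function.

```python
import struct

def process_bits():
    """Bitness of the current process: pointers are 4 bytes in a
    32-bit process and 8 bytes in a 64-bit process."""
    return struct.calcsize("P") * 8

def can_load_dll(proc_bits, dll_bits):
    """A process may load only DLLs of its own bitness:
    32-bit processes cannot load 64-bit DLLs, and vice versa."""
    return proc_bits == dll_bits

if __name__ == "__main__":
    print(f"running as a {process_bits()}-bit process")
    print(can_load_dll(32, 64))  # False
    print(can_load_dll(64, 64))  # True
```

This incompatibility is why IIS must be configured to create 32-bit worker processes before 32-bit ISAPI extensions or filters can run on 64-bit Windows.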
<br />IPv6 Support<br />Internet Protocol version 6 (IPv6) is the next-generation IP protocol for the Internet. The Windows Server 2003 family now implements a production-ready IPv6 stack. On servers where the IPv6 protocol stack is installed, IIS 6.0 automatically supports handling HTTP requests that arrive over IPv6.<br />Granular Compression<br />On a congested network it is useful to compress responses. In IIS 5.0, compression was an ISAPI filter and could only be enabled for the whole server. IIS 6.0 allows a much more granular configuration (file level).<br />Resource Accounting and Quality-of-Service (QoS)<br />Quality-of-Service (QoS) ensures that particular components of the Web server, or specific content served by that server, don’t take over all server resources, like memory or CPU cycles. Administrators can control the resources being used by particular sites, application pools, the WWW service as a whole, and others.<br />Basically, QoS guarantees a certain quality of service to the other services, sites, or applications on the system. It does this by limiting the resources consumed by particular Web sites or applications, and/or by the WWW service itself.<br />In IIS 6.0, QoS takes the form of the following features:<br />• Connection limits<br />• Connection time-outs<br />• Application pool queue length limits<br />• Bandwidth throttling<br />• Process accounting<br />• Memory-based recycling (see above)<br />Tracing Improvements<br />The Windows operating system includes the Event Tracing for Windows (ETW) infrastructure to help individuals troubleshoot problems in the operating system, including those involving HTTP requests in IIS components. IIS HTTP components include Active Server Pages (ASP), Internet Server Application Programming Interface (ISAPI) extensions, and the Secure Sockets Layer (SSL) Filter service, to name a few.
If an HTTP request in IIS fails or becomes unresponsive while ETW is enabled, you can view ETW trace data, called events, to determine which component caused the failure.<br />In Windows Server 2003 Service Pack 1 or later, IIS includes the following tracing features. These features leverage the ETW infrastructure.<br />• IIS Currently-executing Requests Tracing: This tracing feature provides general statistics and details about all requests executing on the server at the moment tracing was started. If the CPU on your server is spiking or if requests become unresponsive, currently-executing requests tracing can help you understand which URLs are being requested, which application pool the requests reside in, and other similar details. This feature does not include the option to specify which components or URLs to trace.<br />• IIS Request-Based Tracing: This tracing feature tracks HTTP requests as they move through IIS components. Request-based tracing can help you target the IIS component processing a request when the request failed or became unresponsive. Request-based tracing is enabled and disabled via the command line, and it allows you to define and trace a specific URL or a group of URLs.<br />Windows Server 2003 Service Pack 1 or later also includes a provider for tracing the IIS Admin service during startup and shutdown. The IIS Admin service provides access to the in-memory configuration store and other dependent services. If IIS Admin hangs on startup or shutdown (for example, because a misbehaving client has registered for change notifications and IIS Admin must wait to cancel that notification before it can shut down), IIS can become unresponsive.
If you experience problems while IIS is starting up or shutting down, the IIS Admin service provider can help you understand the nature of the problem.<br />Logging Improvements<br />Logging improvements in IIS 6.0 address international usage, large-site, and troubleshooting scenarios by adding the following features:<br />UTF-8 Logging Support<br />With additional Unicode and UTF-8 support, IIS 6.0 now supports writing log files in UTF-8 instead of just ASCII (or the local code page). This setting, configurable at the WWW service level, tells HTTP.sys whether to write out the log files in UTF-8 or in the local code page.<br />Binary Logging<br />Binary logging allows multiple sites to write to a single log file in a binary, non-formatted manner. This new logging format offers improved performance over the current text-based logging formats [World Wide Web Consortium (W3C), IIS 6.0, and National Center for Supercomputing Applications (NCSA)] because the data doesn’t have to be formatted in any specific manner. Additionally, binary logging offers scalability benefits due to the dramatic reduction in the number of log file buffers needed to log for tens of thousands of sites. When enabled, all sites log to the same binary log file. Tools can then be used to post-process the log file to extract the log entries. Even home-grown tools can be written to process binary log files, because the format of the log entries and file will be published.<br />Logging of HTTP Substatus Codes<br />IIS 6.0 also supports the ability to log HTTP substatus codes in the W3C and binary logging formats. Substatus codes are often helpful in debugging or troubleshooting, because IIS 6.0 returns specific substatus codes for specific types of problems. For example, if a request cannot be served because the application needed has not been unlocked (like ASP by default on clean installations), the client will get a generic 404.
IIS 6.0 actually generates a 404.2, which will now be logged to W3C and Binary log files.<br />W3C Centralized Logging<br />W3C centralized logging is a global configuration on the server where all Web sites write data to a single log file. Data is stored in the log file using the W3C Extended log file format. The log file can be viewed in a text editor, unlike IIS Centralized Binary Logging which writes data in binary format and requires a parsing tool to view the data. W3C centralized logging is available in Windows Server 2003 Service Pack 1 or later. By default, IIS Web sites write data to individual log files (one log file per Web site). On servers hosting large numbers of sites (hundreds or thousands), the process of maintaining large numbers of open file handles to log files can negatively impact how the server scales. W3C centralized logging improves server scalability on servers hosting large numbers of sites because IIS requires only one open file handle for logging. <br />W3C centralized logging is a server property, not a site property, so when you enable this feature all Web sites on that server are configured to write log data to the central log file. <br />Note W3C centralized logging uses the W3C Extended log format, which includes the following four fields: HostHeader, Cookie, UserAgent, and Referrer. <br />File Transfer Protocol (FTP)<br />The following FTP improvements have also been made in IIS 6.0.<br />FTP User Isolation<br />Traditionally, ISP/ASP customers have used FTP to upload their Web content because of its easy availability and wide adoption. IIS 6.0 allows the isolation of users into their own directory, thus preventing users from viewing or overwriting other users' Web content. The user’s top-level directory appears as the root of the FTP service, thus restricting access by disallowing further navigation up the directory tree. Within the user’s specific site, the user has the ability to create, modify, or delete files and folders. 
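The isolation rule just described, where the user’s top-level directory appears as the root of the FTP service and navigation further up the tree is disallowed, amounts to a path-containment check. Below is a minimal Python sketch of the idea; the helper name and layout are invented for illustration and this is not IIS code (posixpath is used so the sketch behaves identically on any platform, whereas real IIS paths are Windows paths).

```python
import posixpath

def resolve_isolated_path(user_root, requested):
    """Resolve a client-supplied FTP path against the user's own root
    directory, rejecting anything that would escape it (e.g. via '..')."""
    # Treat the requested path as relative to the user's root directory.
    candidate = posixpath.normpath(posixpath.join(user_root, requested.lstrip("/")))
    # The resolved path must be the root itself or live beneath it.
    inside = candidate == user_root or candidate.startswith(user_root + "/")
    if not inside:
        raise PermissionError("navigation above the user root is not allowed")
    return candidate
```

For example, resolve_isolated_path("/ftproot/alice", "site/index.htm") is allowed, while resolve_isolated_path("/ftproot/alice", "../bob/file.txt") raises PermissionError because the normalized path falls outside the user's root.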
The FTP implementation is architected across an arbitrary number of front-end and back-end servers, which increases reliability and availability. FTP can be easily scaled, based on the addition of virtual directories and servers, without impacting the end users.<br />Configurable PASV Port Range<br />PASV FTP, or passive mode FTP, requires the server to open a data port for the client to make a second connection. This connection is separate from the typical port 21 connection that is used for the FTP control channel. The port range used for PASV connections is now configurable with IIS 6.0. This feature can reduce the attack surface of IIS 6.0 FTP servers by allowing administrators to have more granular control over the port ranges that are exposed over the Internet.<br />Improved Patch Management<br />Windows Server 2003 has greatly improved patch management by offering the following new features:<br />• No service interruption while installing patches. The new IIS 6.0 architecture includes worker process recycling, which means an administrator can easily install most IIS 6.0 hot fixes and most new worker process DLLs without any interruption of service.<br />• Auto Update. Auto Update version 1.0 provides three options:<br /> Notify of patch availability the moment the patch is available<br /> Download the patch and notify of its availability<br /> Scheduled install (This option enables the patch to be downloaded and automatically installed at a time decided by the administrator.)<br />• Windows Update Corporate Edition. Many IT departments do not allow users to install security patches and other Windows Update packages without first testing them in a standard operating environment. Corporate Windows Update now enables users to run quality assurance tests on patches required by the organization.
Once patches have passed the specified tests, they can be placed on the Corporate Windows Update server, behind the firewall, where all machines inside the firewall can then pick up the patch.<br />• Resource-free DLLs. Windows Server 2003 has now separated localization resources from the actual implementation. This has improved Microsoft’s ability to quickly design fixes for 30 languages.<br /><br /><span style="font-weight:bold;">Microsoft Windows Operating Systems</span><br /><br />Contents<br /><br />Windows NT Workstation<br /><br />Windows 98<br /><br />Windows 2000<br /><br />Windows Server 2003<br /><br /><br />Microsoft Windows NT<br /><br />Contents<br /><br />1.0 Introducing Windows NT Workstation 4.0<br /><br />2.0 What’s New in Windows NT Workstation 4.0<br /><br />3.0 Beginning Installation<br /><br /><br />1.0 Introducing Windows NT Workstation 4.0<br /><br />Welcome to Microsoft® Windows NT® Workstation 4.0, the most powerful operating system for business computing. Windows NT Workstation combines the ease-of-use of Windows® 95 with the power and reliability of Windows NT.<br /><br />The new Windows 95 interface makes it easier and faster for you to do your work.
New features such as Microsoft Windows NT Explorer make finding and storing files easier than ever, and the new icons and screen design keep your workstation organized and your programs accessible.<br /><br />These are the major ways in which Windows NT Workstation 4.0 is designed to meet the demanding computing needs of today’s business world.<br /><br />1.0.1 Ease of Use, Productivity, and Compatibility<br /><br />Windows NT Workstation 4.0 has the easy-to-use Windows 95 interface, which helps you do your work more easily and quickly. Windows NT Workstation 4.0 ensures high performance for 32-bit programs. All Win16 Windows-based programs gain the preemptive multitasking capabilities of Windows NT Workstation 4.0 and can be run in a separate address space for better responsiveness and reliability.<br /><br />1.0.2 System Reliability and Data Protection<br /><br />Windows NT Workstation 4.0 meets the reliability standards required by management information systems (MIS) professionals and other power users to run critical line-of-business programs. Windows NT Workstation 4.0 protects application programs from one another.<br /><br />1.0.3 Workgroup and Networking Support<br /><br />Built-in file-sharing and print-sharing capabilities make it easy to use Windows NT Workstation for workgroup computing. Windows NT Workstation 4.0 has an open network system interface that is compatible with Banyan VINES, Novell NetWare, UNIX, Macintosh, and LAN Manager 2.x, as well as Microsoft Windows for Workgroups, Windows 95, and standard x86 environments. Up to 10 simultaneous connections can be made to a Windows NT Workstation 4.0 computer for sharing files and printers.<br /><br />1.0.4 Object Linking and Embedding<br />In Windows NT Workstation, you can combine information from several applications into one compound document using the special object linking and embedding (OLE) capabilities of Windows-based applications.
For example, you can create a compound document that includes formatted text, graphics, and information from a spreadsheet or a database, plus icons that run sound recordings or play multimedia devices. You can edit the information without knowing which application was used to create it.<br /><br />The applications included with Windows NT that have OLE capabilities are Windows Messaging, Clipbook Viewer, Paintbrush, Sound Recorder, and WordPad. <br /><br />1.0.5 Built-in Tools for Internetworking and Intranetworking<br /><br />With built-in TCP/IP, Microsoft Internet Explorer, and Microsoft Peer Web Services, you have all the tools and information needed to browse the Internet and publish information to corporate intranets.<br /><br /><br /><br />2.0 What’s New in Windows NT Workstation 4.0<br /><br /><br />Here are the new features that you will find in Windows NT Workstation 4.0:<br /><br />2.0.1 Windows 95 User Interface<br /><br />Windows NT Workstation 4.0 includes the new Windows 95 user interface, making the operating system even easier to use. Additional new features include Windows NT Explorer, My Briefcase, Recycle Bin, and Network Neighborhood.<br /><br />2.0.2 Telephony API (TAPI) and Unimodem<br /><br />Windows NT Workstation 4.0 provides the technologies required by fax applications, Windows Messaging (for electronic mail), and Microsoft Internet Explorer.<br /><br />2.0.3 NetWare 4 Client and Logon Script Support <br /><br />The NDS client for Windows NT Workstation 4.0 provides NetWare login script support and file/print capabilities. However, VLM support is not included.<br /><br />2.0.4 Peer Web Services <br /><br />Microsoft Peer Web Services for Windows NT Workstation is designed for personal Web publishing from computers running Windows NT Workstation. 
With Peer Web Services, you can set up a personal Web server to run on your company’s intranet, which is ideal for development, testing, and peer-to-peer publishing.<br /><br />2.0.5 Microsoft Internet Explorer<br /><br />Use Microsoft Internet Explorer to easily navigate and access information on the Web. With Microsoft Internet Explorer, you can browse Macintosh, NetWare, and Windows Web sites without changing formats.<br /><br />2.0.6 Distributed Applications for the Internet<br /><br />In addition to using the Component Object Model (COM) to integrate applications on a single computer, you can now use the Distributed Component Object Model (DCOM) to integrate client/server applications across multiple computers. DCOM can be used to integrate robust Web browser applications, and it provides the infrastructure for client/server applications that can share components across the Internet or intranet.<br /><br />2.0.7 Windows Messaging<br /><br />Windows NT Workstation 4.0 includes Windows Messaging for managing e-mail (including e-mail over the Internet).<br /><br />2.0.8 DirectDraw and DirectSound Support<br /><br />Includes the APIs necessary to develop and run games and other applications for Windows 95.<br /><br /><br />3.0 Beginning Installation<br /><br />This section describes Windows NT Setup, the program used to install Windows NT on your computer. Installing a new operating system can involve many choices, and Setup is designed to guide you through these choices as smoothly as possible.<br /><br />Installing Windows NT consists of three main steps:<br /><br />1. Preparing to Run Setup<br />Check all hardware against the Windows NT Hardware Compatibility List as well as the System Requirements table. In addition, make sure you have all necessary materials at hand for your installation. Use the worksheet included in this book to organize the information resources you need.<br /><br />2.
Running Setup<br />Start Setup according to the instructions for your computer. Then follow all instructions on your screen, typing in the necessary information as Setup asks for it. During this phase, Setup restarts your computer as needed in order to copy and process the Setup files.<br /><br />3. Finishing Setup and Starting Windows NT<br />After you have given Setup all the information it needs, it fully installs the operating system and then restarts your computer. Windows NT Workstation is now ready to use.<br /><br />3.0.1 What You Should Know Before Running Setup<br /><br />Use the following checklist to organize your information before running Setup.<br /><br />Have you read the Windows NT Workstation readme files?<br />If possible, read the file Setup.txt on your compact disc for late-breaking information pertaining to hardware and configuration. After you finish installing, read the file Readme.doc for any new information not included in this book.<br /><br />If possible, have you backed up all of the files currently on your computer to either a network share or a tape storage device?<br /><br />Have you checked all of your hardware (network adapter cards, video drivers, sound cards, CD-ROM drives, etc.) against the Windows NT Hardware Compatibility List? A copy of this list is included in your package.<br /><br />Important Microsoft only supports hardware that appears on the Windows NT Hardware Compatibility List for use with Windows NT.
If any piece of your hardware does not appear on this list, your installation might not be successful.<br /><br />Do you have all the device driver disks and configuration settings for your third-party hardware?<br /><br />Do you have a formatted disk ready for the Emergency Repair Disk (ERD)?<br />Make sure to use a 3.5-inch 1.44 megabyte (MB) disk for the ERD. Label it “Emergency Repair Disk” and set it aside until Setup asks you to insert it.<br /><br />Note Although the ERD is optional for running Windows NT, Microsoft strongly recommends that you create one during installation and update it every time you make changes to your configuration, such as restructuring partitions, adding new disk controllers and other software, or installing new applications.<br /><br />Do you have your Windows NT Workstation compact disc?<br />– or –<br />Do you have network access to the Windows NT Workstation files?<br /><br />3.0.2 System Requirements<br />The following table describes the system requirements for Windows NT Workstation.<br /><br />Hardware: 32-bit x86-based microprocessor (such as Intel 80486/25 or higher), Intel Pentium, or supported RISC-based microprocessor such as the MIPS R4x00™, Digital Alpha Systems, or PowerPC™<br />• VGA, or higher resolution, monitor<br />• One or more hard disks, with 117 MB minimum free disk space on the partition that will contain the Windows NT Workstation system files (148 MB minimum for RISC-based computers)<br />• For x86-based computers, a high-density 3.5-inch disk drive plus a CD-ROM drive (for computers with only a 5.25-inch drive, you can only install Windows NT Workstation over the network)<br />• For any computer not installing over a network, a CD-ROM drive<br />Memory: 12 MB RAM minimum for x86-based systems; 16 MB recommended<br />• 16 MB RAM minimum for RISC-based systems<br />Optional components: Mouse or other pointing device<br />• One or more network adapter cards, if you want to use
Windows NT Workstation with a network<br /><br />Windows NT Workstation supports computers with up to two microprocessors. Support for additional microprocessors is available from your computer manufacturer.<br /><br />3.0.3 Starting Setup<br /><br />The procedure for starting Setup varies slightly according to:<br />• Your computer platform (Intel x86-based or RISC-based)<br />• How you gain access to the Setup files (from the boot media or over a network)<br />The procedures described here pertain to both Intel x86-based and RISC-based computers. If your computer is RISC-based, notice the special instructions in some of the steps.<br /><br />Note: If you are installing Windows NT on a portable computer with a Personal Computer Memory Card International Association (PCMCIA) port and you want Setup to configure a device connected to that port, you must insert the device and start or restart your computer before running Setup. Make sure that any device you use is approved on the Windows NT Hardware Compatibility List. For ways of finding this list, see “What You Should Know Before Running Setup” earlier in this chapter.<br /><br />The Setup disks included with your package (labeled “Setup Boot Disk,” “Setup Disk 2,” and “Setup Disk 3”) are required if you are installing Windows NT for the first time on an Intel x86-based computer. If you are installing over a network and do not have your package at hand, the Setup disks are created during Setup when you use the winnt or winnt32 command. Also, the Setup disks let you start Windows NT at a later time when it might not be able to start on its own due to a system error. 
You can use the Setup disks together with the Emergency Repair Disk, as described in Help, to recover your system when it is unable to start.<br /><br />If your computer’s BIOS supports the El Torito Bootable CD-ROM (no-emulation mode) format, you can skip over using the Setup disks during a new installation of Windows NT 4.0 and start Setup directly from the Windows NT Workstation compact disc.<br />If you are installing on a RISC-based computer, this is the appropriate method for starting Setup as well. Check the documentation for your computer to learn whether this option is available to you.<br /><br />3.0.4 To install Windows NT Workstation on your computer using the Setup disks and/or the Windows NT Workstation compact disc<br /> 1. With your computer turned off, insert the disk labeled “Windows NT Setup Boot Disk” into drive A of your computer.<br />Or, if your computer’s BIOS supports the El Torito Bootable CD-ROM (no-emulation mode) format, insert the Windows NT Workstation 4.0 compact disc with your computer turned off. <br /> 2. Turn on your computer.<br />If you are installing on an Intel x86-based computer, Setup will start automatically.<br />If you are installing on a RISC-based computer, follow these additional steps:<br /> 3. At the ARC screen, choose Run A Program from the menu.<br /> 4. At the prompt, type cd:\system\setupldr and press ENTER, where system is the directory name matching your system type: MIPS, PPC (for PowerPC computers), or ALPHA.<br />For some RISC-based computers, you might need to supply a full device name instead of typing cd:. See your computer documentation for more information.<br /><br />Once Setup is started, follow the instructions on the screen. Refer to the appropriate sections in this book when you need assistance.<br /><br />3.0.5 To install Windows NT Workstation 4.0 using a network connection to the Setup files on a remote server<br /> 1. 
Using your existing operating system or an MS-DOS disk, establish your connection to the share containing the Setup files.<br /> 2. If your computer is currently running a previous version of Windows NT, type winnt32 at the command prompt. For all other installations, type winnt.<br />Setup begins with a brief welcoming screen asking how you want to proceed with installation. If you are installing Windows NT Workstation 4.0 on your machine for the first time, press ENTER to begin the Setup process.<br /><br />On this and the other opening Setup screens, Help is available by pressing F1. These Help screens contain useful background information and suggestions to follow while running Setup.<br />If you are continuing an earlier failed attempt to install Windows NT, certain repair options are available by pressing R. For guidance in using these screens, refer to the available Help by pressing F1.<br /><br />You can cancel Setup entirely at any point on these screens by pressing F3.<br /><br />3.0.6 Configuring a Mass Storage Device<br /><br />Next, Setup scans your computer to detect the mass storage devices, such as CD-ROM drives and SCSI adapters. Hard disks are not included in this scan.<br /><br />Note<br />Setup automatically detects all integrated device electronics (IDE) and enhanced small device interface (ESDI) drives. These drives are not displayed on this screen.<br /><br />Setup lists all the mass storage devices it finds. You can accept this list, or you can choose to add to it if you have a disk with device drivers from the manufacturer of your device.
You can also wait and install additional mass storage devices after Setup is complete.<br /><br />If any of your mass storage devices were not detected, press S to install them at this time.<br /><br />3.0.7 Verifying Your Hardware<br /><br />Next, Setup displays the list of hardware and software components it finds on your computer.<br /><br />Use the UP ARROW and DOWN ARROW keys to move to a setting on the list that needs to be changed. Then, press ENTER to see alternatives for that item.<br /><br />Configuring the Disk Partitions<br /><br />Disk space on your hard drive(s) is divided into usable areas called partitions. Before it can install Windows NT, Setup must know the appropriate disk partition for installing the system files.<br /><br />A disk partition can be any size from 1 MB to the entire hard disk. But the partition where you store Windows NT files must be on a permanent hard disk and must have enough unused disk space to hold all the files. Refer to the section “System Requirements” earlier in this chapter to double-check that your computer has adequate disk space for installing the Windows NT files.<br />The system partition is the partition that has the hardware-specific files needed to load Windows NT. On an x86-based computer, Windows NT looks for certain files in the root directory of drive C (Disk 0) when you start your computer. This partition must be formatted with either the NT File System (NTFS) or the File Allocation Table (FAT) file system in order for Windows NT to start. It must be formatted with the FAT file system if you want to run both Windows NT and MS-DOS or if you are dual-booting with Windows 95.
For more information, see the next section, “Choosing a File System for the Windows NT Partition.”<br /><br />If you will use only the Windows NT Workstation operating system:<br /><br />• On a new x86-based computer, make a single partition and format it with NTFS, as described in the following section, “Choosing a File System for the Windows NT Partition.”<br /><br />• On an existing system containing files you want to keep, maintain all existing partitions. You can install the Windows NT Workstation files on any partition with sufficient free space: 117 MB for x86-based machines or 148 MB for RISC-based computers.<br /><br />If you plan to use another operating system, such as MS-DOS or Windows 95, in addition to Windows NT:<br /><br />• To run both MS-DOS and Windows NT on the same computer, you must first install MS-DOS. Installing it later might overwrite the boot sector on the hard disk, making it impossible to start Windows NT without using the Emergency Repair Disk.<br />• Make sure the system partition (for example, drive C) is formatted as FAT. For example, if you already have MS-DOS installed and want to keep it, preserve the system partition and keep the file system as FAT, as described in the following section, “Choosing a File System for the Windows NT Partition.” You can install the Windows NT files on any uncompressed partition with sufficient free space.<br /><br />Important<br />You cannot install Windows NT on a compressed drive created with any utility other than NTFS compression.<br /><br />• To use NTFS and have access to another operating system, you must have at least two disk partitions. Format drive C with a file system that Windows NT and your other operating system can use, such as FAT. Format the other partition for NTFS.
You can place the Windows NT files on any uncompressed (or NTFS-compressed) partition with sufficient free space.<br /><br />If you are installing Windows NT on a computer currently configured to start either OS/2 or MS DOS using the boot command, Windows NT Setup sets up your system so that you can run Windows NT or whichever of the two operating systems (MS DOS or OS/2) you last started before running Windows NT Setup. <br /><br />If you have OS/2 Boot Manager installed on your computer and want to continue to use it after Windows NT Workstation installation is complete, you need to re-enable it. After Setup is complete, click the Start button and point to Programs and then Administrative Tools. Click Disk Administrator. Select the OS/2 Boot Manager partition, and then select Mark Active from the Partition menu.<br /><br />3.0.8 Choosing a File System for the Windows NT Partition<br /><br />Once you have selected a partition for installing Windows NT, you must instruct Setup which file system, NTFS or FAT, to use with the partition. Make sure you know all the considerations when choosing one file system over another.<br /><br /> <br /><br />Use the following information when choosing to format or convert the partition where the Windows NT files will be installed:<br /><br />• For an unformatted partition, you can choose to format it with either the NTFS or FAT file system. Choose the FAT option if you want to access files on that partition when running Windows NT, MS DOS, Windows 95, or OS/2 on this computer. Choose the NTFS option if you want to take advantage of the features in NTFS.<br /><br />• For an existing partition, the default option keeps the current file system intact, preserving all existing files on that partition.<br /><br />You might choose to convert an existing partition to NTFS so as to make use of Windows NT security. 
This option preserves existing files, but only Windows NT has access to files on that partition.<br /><br />Or, you might instead choose to reformat an existing partition to either the NTFS or FAT file system, which erases all existing files on that partition. If you choose to reformat the partition as NTFS, only Windows NT will have access to files created on that partition.<br /><br />Note<br />After running Setup, you can convert a partition from FAT to NTFS at any time by using the Convert utility (Convert.exe). If you want to convert an NTFS partition to FAT, you must first back up all the files, reformat the partition (which erases all files), and then restore the files from the backup version. You must also back up data before repartitioning a hard disk. <br /><br />The following table summarizes the main criteria for choosing a file system for a Windows NT partition.<br /><br />Windows NT File Systems<br /><br />Security<br />NTFS: Supports complete Windows NT security, so you can specify who is allowed various kinds of access to a file or directory.<br />FAT: Files are not protected by the security features of Windows NT.<br /><br />Activity log<br />NTFS: Keeps a log of activities to restore the disk in the event of power failure or other problems.<br />FAT: Does not keep a log.<br /><br />File sizes<br />NTFS: Maximum file size is 4 GB to 64 GB, depending on the size of your clusters.<br />FAT: Maximum file size is 4 GB.<br /><br />File compression<br />NTFS: Supports flexible per-file compression.<br />FAT: File compression is not supported.<br /><br />Operating system compatibility<br />NTFS: Recognized only by Windows NT. When the computer is running another operating system (such as MS-DOS or OS/2), that operating system cannot access files on an NTFS partition on the same computer.<br />FAT: Allows access to files when your computer is running another operating system, such as MS-DOS or OS/2.<br /><br />MS-DOS data sharing<br />NTFS: Cannot share data with MS-DOS on the same partition.<br />FAT: Enables you to share data with MS-DOS on the same partition.<br /><br /><br />3.0.9 Choosing a Directory for the Windows NT Workstation Files <br /><br />After Setup accepts your partition and file system choices, it displays the name of the directory where it will install the Windows NT files. You can accept the directory that Setup suggests or type the name of the directory you prefer. For most installations, the proposed directory is appropriate.<br /><br />Setup displays a special screen if it detects one or more of the following operating systems on your computer:<br /><br />• Windows NT (versions 3.1, 3.5, or 3.51)<br /><br />• Windows 95<br /><br />• Windows 3.x<br /><br /><br />In such a case, your decision to install in the directory Setup has chosen or to specify a new directory should be based on the following considerations:<br /><br />• Do you want Setup to migrate the registry settings from your existing operating system?<br /><br />• Do you want the ability to choose among your operating systems every time you start your computer?<br /><br />Note<br />If your computer is running Windows 95, it is not possible to install the Windows NT 4.0 files in the same directory. You must specify a new directory. Your Windows 95 settings will not be migrated, and you will need to reinstall your applications under Windows NT.<br /><br /><br />---------------------------------------------------***-----------------------------------------------------<br /><br /><br />Microsoft Windows 98 Second Edition<br /><br /><br />CONTENTS<br /><br />1. QUICK TIPS FOR AN ERROR-FREE SETUP<br /><br />2. GENERAL SETUP ISSUES<br /><br />3. INSTALLING WINDOWS 98 SECOND EDITION FROM MS-DOS<br /><br />4. PERFORMING A CLEAN BOOT <br /><br />5. ANTIVIRUS SOFTWARE<br /><br />6. FINDING HARD DISK PROBLEMS DURING SETUP USING SCANDISK<br /><br />7. 
CAB FILE ERRORS DURING SETUP<br /><br />8. REMOVING WINDOWS 98 SECOND EDITION<br /><br />9. POTENTIAL ISSUES IF YOU HAVE A COMPRESSED DRIVE<br /><br />10. INSTALLING WINDOWS 98 SECOND EDITION WITH WINDOWS NT <br /><br />11. SETUP ERROR MESSAGES<br /><br /><br />1. QUICK TIPS FOR AN ERROR-FREE SETUP<br />============================================<br /><br />Disable all antivirus programs running on your system. If these utilities are left running during Setup, your system may stop responding. <br /><br />NOTE: Some systems have antivirus capabilities built into the system. If this option is left enabled in the BIOS/CMOS settings, you may receive a warning about "virus-like activity" or "Master Boot Record" changes. You must allow these changes to take place for Setup to complete successfully. See your antivirus software documentation for more information.<br /><br />Run ScanDisk to check and fix any problems with your hard disk(s).<br /><br />Close all running programs. This includes disabling any screen savers, Advanced Power Management settings, and other programs that may cause Setup to stop responding. See "Performing a Clean Boot" for more information.<br /><br /><br />2. GENERAL SETUP ISSUES<br />============================<br /><br />If you have the Number Nine Imagine 128 display adapter or the STB Velocity 128 3D AGP (Nvidia Riva 128), you should run Setup from MS-DOS or change your display driver to VGA.<br /><br /><br />2.0.1 Upgrade vs. Full install versions of Windows 98 Second Edition <br /><br />If you have the Upgrade version of Windows 98, Setup will attempt to find a qualifying upgrade product on your system. If Setup fails to find a previous version of Windows, you will be prompted to insert your previous media for proof of compliance. 
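<br /><br />The quick tips in section 1 boil down to a short MS-DOS command sequence run before Setup. The following is a hypothetical sketch only: the /AUTOFIX switch assumes the MS-DOS 6.2-or-later version of ScanDisk, and drive D: is assumed to be your CD-ROM drive.<br /><br />

```bat
REM Check the hard disk and repair any problems without prompting
REM (/AUTOFIX is an assumption; it requires the MS-DOS 6.2+ ScanDisk)
scandisk c: /autofix

REM Switch to the Windows 98 Second Edition CD (drive D: is an assumption)
d:
cd \win98

REM Start Setup
setup
```

<br />If ScanDisk reports problems it cannot repair, resolve them before running Setup, as described in "Finding Hard Disk Problems During Setup Using ScanDisk" below.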
<br /><br />2.0.2 Disk Space requirements for Windows 98 <br /><br />Because many factors go into calculating the amount of free space required for Windows 98, these figures are only estimates based on typical Windows 98 installations.<br /><br />Typical upgrade from Windows 95: requires approximately 205 MB of free hard disk space, but may require as much as 315 MB, depending on your system configuration.<br /><br />Full install of Windows 98 on a FAT16 drive: requires 260 MB of free hard disk space, but may range from 210 to 400 MB, depending on system configuration and options selected.<br /><br />Full install of Windows 98 on a FAT32 drive: requires 210 MB of free hard disk space, but may range from 190 to 305 MB, depending on system configuration and options selected.<br /><br />Also, if you are installing Windows 98 to a drive other than C, Setup can require up to 25 MB of free disk space on drive C for the system and log files created during Setup.<br /><br />Uninstall: If you wish to back up Windows 95 before upgrading, select the Save Your System Files option during Setup. This will allow you to uninstall Windows 98 Second Edition in the event you have problems. However, there are certain cases in which you cannot do this:<br /><br />* Your current Windows installation is on a compressed drive.<br />* You are installing to a new directory or performing a clean install with no previous version available.<br />* You are running a version of MS-DOS earlier than 5.0.<br /><br />MAKE A NEW STARTUP DISK! <br /><br />Because of changes in the real-mode and protected-mode kernels to support FAT32, Windows 98 Second Edition startup disks are not compatible with earlier versions of Windows. Therefore, when you set up Windows 98 Second Edition for the first time, be sure to make a new Startup Disk, EVEN IF YOU ARE NOT PLANNING TO USE FAT32.<br /><br />2.0.3 Program Manager from Windows 3.x<br /><br />Program Manager is no longer supported in Windows 98. 
<br /><br />Program Manager (Progman.exe) is left on the system for troubleshooting purposes, but it will NOT contain any groups. In addition, if you are upgrading over Windows 95, your existing .grp files will be removed, because old .grp files are known to cause some problems when installing Windows 98. If you are upgrading from Windows 3.x, the old .grp files will remain on the system and Program Manager will still have some functionality. <br /><br />You should back up your existing Progman.ini and *.grp files before upgrading to Windows 98 if you intend to use Program Manager.<br /><br /><br />3. INSTALLING WINDOWS 98 SECOND EDITION FROM MS-DOS<br /><br />If you are starting with a clean or new hard disk, or if you have problems running Setup from your previous version of Windows, you may have to run Windows 98 Second Edition Setup from MS-DOS. Although installing from MS-DOS is typically the slower method of installation, it is often the safest and should be used when other types of installations fail.<br /><br />3.0.1 MS-DOS Boot Hot Keys<br /><br />There are several ways to boot your system to an MS-DOS command prompt safely. The easiest way is by using these hot keys:<br /><br />* Windows 98 Second Edition<br />Hold the CTRL key down while your computer is booting. <br />This will take you directly to the Windows 98 Boot Menu (the F8 key is still functional, but there is no "Starting Windows 98" prompt in Windows 98, so it's hard to know exactly when to press it).<br /><br />* Windows 95<br />Press the F8 key at the "Starting Windows 95" prompt. <br /><br />This will take you to the Windows 95 Boot Menu.<br /><br />* MS-DOS 6.x<br />Press the F8 key at the "Starting MS-DOS" prompt. <br /><br />This allows you to manually choose which drivers to load or to bypass your system files. <br /><br />* Real-mode CD-ROM drivers<br />You will need real-mode CD-ROM drivers loaded so you can access the Windows 98 Second Edition CD. 
If you have run Windows 98 Setup before and have created a Startup Disk, you can use the CD-ROM drivers included on that disk. If you do not have a Startup Disk, you will need to run the installation program that came with your CD-ROM hardware.<br /><br />After you have access to your CD-ROM drive, you can switch to the drive containing the Windows 98 Second Edition CD and type: SETUP. Setup should now continue.<br /><br />3.0.2 Editing your Config.sys and Autoexec.bat files<br /><br />Your computer's Config.sys and Autoexec.bat files tell your computer what programs and devices to load on startup (for example, the Autoexec.bat file may direct your computer to automatically load a virus-scanner program). Windows 98 Second Edition Setup will not run properly with some programs and devices. <br /><br />To remove or disable such a program or device, you may need to edit the Config.sys and/or Autoexec.bat files.<br /><br />To edit the Config.sys and Autoexec.bat files:<br /><br />1. In Windows 3.1 or 3.11, click File, click Run, type Sysedit, and then press ENTER. In Windows 95, click Start, click Run, type Sysedit, and then press ENTER.<br /><br />2. In the Config.sys or Autoexec.bat dialog box, type REM at the beginning of any line(s) that you want to disable.<br /><br />3. Save your changes and restart your computer.<br /><br /><br />3.0.3 Tips for Installing Real-Mode CD-ROM Drivers<br /><br />Currently running Windows 95:<br />If you are currently running Windows 95, you may already have a portion of the CD-ROM drivers loaded. 
If you can shut down to MS-DOS mode and get access to your CD-ROM drive, try the following:<br /><br />* Reboot and press the F8 key at "Starting Windows 95".<br />* Choose "Command Prompt Only."<br />* At the C:\ prompt, type: DosStart.bat.<br /><br />You should now have access to your CD-ROM drive.<br /><br />Lost access to the CD-ROM drive during Setup:<br />If you lose access to your CD-ROM drive during Windows 98 Second Edition Setup, you can try the following:<br /><br />* Reboot and press the F8 key at "Starting Windows 95," and then choose the option for Command Prompt Only. If you are running MS-DOS, boot directly to a command prompt.<br />* Edit the Autoexec.bat file by typing: Edit Autoexec.bat<br />* Delete the text "Rem by Windows 98 Setup" in front of the line that includes the reference to Mscdex.exe.<br />* Exit Edit by typing ALT-F-X and save the file when prompted.<br />* Reboot. Either Setup should continue on its own, or you should run Setup again, choosing Safe Recovery if prompted.<br /><br />3.0.4 Installing Windows 98 Second Edition from Your Hard Disk<br /><br />By copying all the Setup files to your hard disk and then installing from your hard disk, you can eliminate most of the problems associated with file copy and disk I/O issues. In this type of installation, you can also unload your CD-ROM drivers to free up conventional memory and avoid low-memory errors. To copy the Setup files locally:<br /><br />From Windows 95:<br />* Free an additional 120 MB of disk space in addition to what Setup will require. Setup will typically require 195 MB for an upgrade from Windows 95.<br /><br />* Create a temporary folder called "W98Flat" on a drive with enough free space to store the Setup files.<br />* Copy the contents of the Win98 folder on your Windows 98 Second Edition CD to the temporary folder you just created. You should also copy the Win98 subfolders, but this is not essential if you are short on disk space.<br />* Reboot. 
Press the F8 key at "Starting Windows 95" and choose Safe Mode Command Prompt Only.<br />* Now, switch to the temporary folder containing the Windows 98 Second Edition Setup files and type: SETUP.<br /><br />From MS-DOS:<br />* Make sure you have access to your CD-ROM drive. See above for more information.<br />* Free an additional 120 MB of disk space in addition to what Setup will require. Setup will typically require 195 MB for an upgrade from Windows 95.<br />* Create a temporary folder called "W98Flat" on a drive with plenty of free space to store the Setup files. To create the temporary directory, switch to that drive letter and type: MD W98Flat.<br />* Now, switch to the Windows 98 Second Edition CD-ROM drive and to the Win98 directory.<br />* Then copy the Windows 98 Second Edition Setup files to the temporary directory you just created by typing: <br /><br />Copy *.* <drive letter>\W98Flat.<br />* After all the files are copied, restart your system and perform a clean boot by bypassing your startup files. See "Performing a Clean Boot" for more information.<br />* Switch to the temporary directory you just copied the files to and start Setup by typing: SETUP.<br /><br /><br />4. PERFORMING A CLEAN BOOT<br /><br />Third-party device drivers, utilities, or other programs can prevent a successful installation. Clean-booting your system can fix many of these problems. You can perform a clean boot in the following ways:<br /><br />Using a floppy disk to start your computer:<br />* Boot from a Windows 98 Second Edition Startup Disk. This disk offers the option of loading with or without CD-ROM drivers and provides a clean environment for running Setup.<br /><br />* Boot from a previous Windows 95 or MS-DOS boot disk. 
This does not give access to your CD-ROM drive, but can be used if you copy the Setup files to your hard disk as described above.<br /><br />Windows 95 Safe Mode Command Prompt Only:<br />* Boot your system and press the F8 key at the "Starting Windows 95" prompt.<br />* Choose Safe Mode Command Prompt Only. This also does not provide access to your CD-ROM drive, but can be used if the Setup files are copied to your hard disk as described above.<br /><br />Windows 98 Second Edition step-by-step boot:<br />If you want to load some drivers manually, do this:<br />* Boot your system and hold down the CTRL key (or press F8) to reach the Boot Menu.<br />* Choose the Step by Step option.<br />* Now, say YES only to the devices you want to load. In most cases, you should say YES to Himem.sys.<br /><br />Windows 95/MS-DOS Clean boot with more memory:<br /><br />You can increase the amount of memory available by making the following modifications to your Config.sys file. You can also make these changes on your Boot Disk. <br /><br />NOTE: These are the only drivers you should load.<br /><br />Device=Himem.sys<br />Device=EMM386.exe noems<br />Dos=high,umb<br />Device=drvspace.sys /move <br /> (Optional - only if using DriveSpace compression)<br /><br />Creating a Windows 98 Second Edition Startup Disk:<br /><br />If Windows 98 Second Edition Setup fails after copying most of the files to your hard disk, you may be able to create a Startup Disk by using the Bootdisk.bat utility.<br />* Boot to an MS-DOS prompt.<br />* Change directories to your Windows\command directory.<br />* Run Bootdisk.bat; it will prompt you to create a Startup Disk.<br /><br />This disk contains generic real-mode CD-ROM drivers that may be useful when running Setup again.<br /><br /><br />5. ANTIVIRUS SOFTWARE<br /><br />Make sure that no antivirus program is running while you are setting up Windows 98 Second Edition. 
If the program is a terminate-and-stay-resident program, remove any references to it in your Autoexec.bat, Config.sys, and Win.ini files.<br /><br />If your BIOS has built-in virus protection, disable it before running Setup. To disable it, you must use the CMOS setup program for your BIOS. For more information, see your computer documentation.<br /><br />See the notes for specific antivirus programs below.<br /><br />CMOS/BIOS-enabled virus protection:<br />Some systems come with virus protection built into the system. If this is left enabled, you may be warned with "Virus-like Activity" or "Master Boot Record Changed" messages. You must allow these changes to take place. If you choose to restore the previous settings, your system may no longer boot.<br /><br />Norton AntiVirus:<br />If Norton AntiVirus is installed, you may see the following warning at the end of the initial file copy procedure: <br /><br />"Application Wininst0.400\Suwin.exe is attempting to update the Master Boot Record."<br /><br />You should choose Continue (C) for Setup to finish properly. If you do not allow these changes to take place, Setup may stop responding.<br /><br />Dr. Solomon's AntiVirus:<br />If you are running Dr. Solomon's AntiVirus utility, you may receive a blue screen fatal exception error in Ios.vxd while trying to create a Startup Disk during Setup. You should click Cancel on the Startup Disk screen when the progress bar is at 20%. This will allow Setup to continue. <br />Look for an update to Dr. Solomon's AntiVirus software on their Web site to resolve this issue.<br /><br />6. FINDING HARD DISK PROBLEMS DURING SETUP USING SCANDISK <br /><br />The version of ScanDisk run during Windows 98 Second Edition Setup only checks for errors; it does not fix them. If problems exist, Setup cannot continue until they are fixed. To fix these problems, quit Setup and run ScanDisk from Windows 95 or MS-DOS. 
See below for more information about using ScanDisk to resolve these issues.<br /><br />Fixing Hard Disk Problems:<br />If, during Setup, you see a message telling you that you must run ScanDisk to fix problems on your hard disk, follow these steps to fix the problems.<br /><br />If you are setting up Windows 98 Second Edition over MS-DOS or a previous version of Windows, such as Windows 3.1:<br /><br />1. Quit Windows.<br /><br />2. If you are setting up from floppy disks, insert Setup Disk 1 into the floppy drive, and then type the following at the command prompt:<br /><br /> a:scandisk.exe /all<br /><br /> where "a" is the drive that contains the Windows disk.<br /><br />3. If you are setting up from a CD, insert the CD, and then type the following:<br /><br /> d:\win98\scandisk.exe /all<br /><br /> where "d" is the drive that contains the CD.<br /><br />4. Follow the instructions on your screen, and fix any problems that ScanDisk finds.<br /><br />5. Start Windows, and then run Setup again.<br /><br />If you are setting up Windows 98 Second Edition over a previous version of Windows 98 or Windows 95:<br /><br />1. Quit Setup.<br /><br />2. On the Start menu, point to Programs, point to Accessories, point to System Tools, and then click ScanDisk.<br /><br />3. Check your hard disks and any host drives you have for errors, and repair any problems found. Be sure to do a complete surface scan on all your drives, or Setup may still find errors.<br /><br />Problems Running ScanDisk:<br /><br />There are certain cases in which ScanDisk may not be able to fix an issue or may itself produce errors. <br /><br />You are running DriveSpace 3 compression:<br /><br />If DriveSpace 3 compression is installed on your system, you may be low on conventional memory. To free up memory, you can try the following:<br /><br />* If you are running MS-DOS 6.x, you can run Memmaker.exe to free enough memory for ScanDisk to complete. 
<br />* See PERFORMING A CLEAN BOOT for information on how to perform a clean boot with more memory.<br />* Check your drives while running Windows 95.<br /><br />If you still don't have enough memory, or if you have other problems while Setup is running ScanDisk, you can bypass ScanDisk in Setup by running Setup with the /IS option. To do this, type the following command:<br /><br /> setup /is<br /><br />NOTE: Bypassing ScanDisk during Setup is not recommended. If you do, there may be problems with your hard disk that could cause Windows 98 Second Edition not to install or run correctly. <br /><br /><br />7. CAB FILE ERRORS DURING SETUP<br /><br />When you try to install Windows 98, or install a component that requires copying files from the original Windows disks or CD-ROM, you may receive one of the following messages:<br /><br />- "Setup has detected the following decoding error: Could not decode this setup (.CAB) file. Setup will attempt to recover from this situation, click OK to continue."<br /><br />- "Setup cannot copy all of the files from your Windows 98 CD. Clean the Windows 98 CD with a soft cloth, return it to the CD-ROM drive, and then click OK."<br /><br />This behavior can occur for any of the following reasons:<br /><br /> - Your Windows 98 CD-ROM may be damaged, dirty from smudges or fingerprints, or may contain scratches.<br /><br /> - Your CD-ROM drive is not functioning properly. The CD-ROM may vibrate too much for the laser to accurately read the data.<br /><br /> - Your computer is over-clocked. Extracting files from the Windows 98 cabinet files is memory intensive. If your computer is over-clocked beyond the default settings, it can contribute to decoding errors. Computers that are not over-clocked but are having a cooling problem can also experience decoding errors.<br /><br /> - Your computer has bad or mismatched RAM or cache. 
For example, you are using EDO and non-EDO RAM, or you are using different RAM speeds. Even if Windows seems to be running without problems, the additional stress of extracting files and accessing the disk may contribute to decoding errors.<br /><br /> - Your computer has Bus Mastering or Ultra DMA enabled in the BIOS and in Device Manager. The data may be moving too quickly for the system to keep up.<br /><br /> - You are using a third-party memory manager.<br /><br /> - There is a virus on your computer.<br /><br />To resolve this error message, follow these steps. <br /><br />1. Remove the CD-ROM from the CD-ROM drive, rotate it one-quarter to one-half a turn, reinsert the CD-ROM into the drive, and then click OK.<br /><br />2. Remove the CD-ROM from the CD-ROM drive, clean it with a soft cloth, reinsert the CD-ROM into the drive, and then click OK.<br /><br />3. Try using real-mode CD-ROM drivers. If you are unable to locate the real-mode CD-ROM drivers for your CD-ROM drive, try using the CD-ROM drivers on the Windows 98 Startup Disk. The Windows 98 Startup Disk provides support for most types of CD-ROM drives, including IDE and SCSI CD-ROM drives. Run Windows Setup from MS-DOS.<br /><br />4. Create an empty folder on one of your hard drives called "W98Flat". Copy the contents of the Win98 folder on the CD-ROM to the "W98Flat" folder you just created. If you are unable to copy the contents of the Win98 folder on the CD-ROM to your hard disk, the CD-ROM may be damaged.<br /><br />5. Check your computer for a virus using virus-detection software.<br /><br />6. Run Windows 98 Setup using the following command:<br /><br /> " setup /c " (without the quotation marks) This switch bypasses running SMARTDrive. This makes Setup run slower, but it should provide a more reliable environment.<br /><br />7. 
If you are still receiving CAB errors in Windows 98, you can manually extract all the Windows 98 files from the Windows 98 cabinet files on the CD-ROM to your hard disk, and then run Windows 98 Setup from your hard disk. It requires approximately 300 MB of free hard disk space to extract the Windows 98 files. You can use the Ext.exe utility to extract the Windows 98 files. This utility is located on the Windows 98 Startup Disk and in the \Oldmsdos folder on the Windows 98 CD-ROM. To manually extract the Windows 98 files, follow these steps:<br /><br /> a. Insert your Windows 98 Startup Disk in the floppy disk drive, and then restart your computer.<br /><br /> NOTE: If you do not have a Windows 98 Startup Disk, see the section "Tips for Installing Real-Mode CD-ROM Drivers" under INSTALLING WINDOWS 98 SECOND EDITION FROM MS-DOS.<br /> <br /> b. At the command prompt, type "ext" (without the quotation marks).<br /><br /> c. When you are prompted for the location of the cabinet files, type the path to the W98Flat folder that you created in step 4 above.<br /><br /> d. When you are prompted for the files to extract, type *.*<br /><br /> e. When you are prompted for the location to which the files are to be extracted, type the path to the W98Flat folder you created earlier. <br /><br /> *Note* This does not extract the files in the Precopy1.cab and Precopy2.cab cabinet files.<br /><br /> f. After all the files have been extracted, run Setup from the W98Flat folder on your hard disk.<br /><br />8. Finally, if all the above steps still fail, you can try to slow down your computer. To slow down your computer, use any or all of the following methods:<br /><br /> - Change your computer's CMOS settings. Bus mastering, external/internal cache, RAM settings/timings, and other settings contribute to the speed at which your computer runs. For information about how to change these settings, consult the documentation that is included with your computer.<br /><br /><br /><br />8. 
REMOVING WINDOWS 98 SECOND EDITION FROM YOUR SYSTEM<br /><br />Saving System Files:<br />Windows 98 Second Edition Setup offers users the option of backing up their previous version of Windows in case Windows 98 Second Edition needs to be uninstalled later. To enable this option, you must select the Save Your System Files option when prompted during Setup. Setup will then create the following hidden files, which are necessary to uninstall Windows 98:<br /> * Winundo.dat<br /> * Winundo.ini<br /> * Winlfn.ini<br /><br />NOTE: Deleting these files will prevent Windows 98 Second Edition from being uninstalled.<br /><br />If any of the following apply, you will not be able to uninstall Windows 98 Second Edition, and Setup will not prompt you to Save System Files:<br /><br />* Your current Windows installation is on a compressed drive.<br />* You are installing to a new directory or performing a clean install with no previous version available.<br />* You are running a version of MS-DOS earlier than 5.0.<br /><br />NOTE: The files necessary to remove Windows 98 Second Edition must be saved on a local hard drive. You cannot save them to a network drive or a floppy disk. If two or more drives have adequate free space, you can select the drive to which to save the uninstall information.<br /><br />There are also several actions that could prevent Windows 98 from being uninstalled after Setup is complete. The following is a list of items that will cause the uninstall information to be removed from your system:<br /><br />* Converting your hard disk to FAT32<br />* Compressing your hard disk with DriveSpace<br /><br />NOTE: These utilities should warn you that the uninstall information will be lost before they perform the conversion or compression.<br /><br />Removing Windows 98 Second Edition:<br />To remove Windows 98 Second Edition and completely restore your system to its previous versions of MS-DOS and Windows 3.x, or Windows 95:<br /><br />1. 
Click Start, point to Settings, and then click Control Panel.<br />2. Double-click Add/Remove Programs.<br />3. On the Install/Uninstall tab, click Uninstall Windows 98, and then click Add/Remove.<br /><br />Or, if you are having problems starting Windows 98, use your Startup Disk to start your computer, and then run UNINSTAL from the Startup Disk.<br /><br />NOTE: UNINSTAL needs to shut down Windows 98. If there is a problem with this on your computer, restart your computer and press F8 when you see the message "Starting Windows 98." Then, click Command Prompt Only and run UNINSTAL from the command prompt.<br /><br />If Setup did not complete successfully and you want to restore your previous versions of MS-DOS and Windows 3.x, or Windows 95, you can run UNINSTAL from the \Windows\Command directory on your hard disk, or from your Startup Disk.<br /><br />If you saved your files on a drive other than C, you can use the /w option to specify the drive where the files are located. <br /><br />For example:<br /><br /> uninstal /w e:<br /><br />where e: is the drive containing your previous system files.<br /><br />If Windows 98 is running and you want to remove the uninstall files to free disk space, follow these steps:<br /><br />1. Click Start, point to Settings, and then click Control Panel.<br />2. Double-click Add/Remove Programs.<br />3. On the Install/Uninstall tab, click Old Windows 3.x/MS-DOS System Files, and then click Remove. Or click Remove Windows 95 system files (Uninstall Info).<br /><br />After the uninstall files are removed, you can no longer remove Windows 98.<br /><br /><br />9. POTENTIAL ISSUES IF YOU HAVE A COMPRESSED DRIVE<br /><br />If you have compressed your hard disk, you may get a message that there is not enough space on the host partition of the compressed drive. Setup may have to copy some files to your startup drive, the host for your startup drive, or the host for your Windows drive. 
If you get this message, free some space on the specified drive, and then run Setup again. Try one of the following:<br /><br />* Set up Windows on an uncompressed drive if possible.<br /><br />* Delete any unneeded files on your host partition.<br /><br />* If you are running Windows 3.1 and have a permanent swap file, try making it smaller. In Control Panel, double-click 386 Enhanced, and then click Virtual Memory. Modify the size of your swap file.<br /><br />* Use your disk compression software to free up some space on the host drive for the compressed drive.<br /><br />If you compressed your drive by using DriveSpace or DoubleSpace, follow these steps:<br /><br /> 1. Quit Windows.<br /> 2. Run Drvspace.exe or Dblspace.exe.<br /> 3. Select the compressed drive on whose host you want to free space.<br /> 4. On the Drive menu, click Change Size and adjust the free space as necessary.<br /><br />If you compressed your drive using Windows 95 DriveSpace, or DriveSpace 3 from Plus!, follow these steps:<br /><br /> 1. Start Windows.<br /> 2. Click Start, point to Programs, point to Accessories, point to System Tools, and then click DriveSpace.<br /> 3. Select the compressed drive on whose host you want to free space.<br /> 4. On the Drive menu, click Change Size, and then adjust the free space as necessary.<br /><br />If you used other compression software, such as Stacker, consult the software documentation.<br /><br />NOTE: You may notice a discrepancy between the amount of free space reported by Setup and the amount of space you think is available on your host drive. Windows uses some space for creating a swap file. This space may not appear to be allocated when Windows is not running. <br /><br />NOTE: If you create a Startup Disk during Setup, make sure you do not use a compressed disk for the Startup Disk.<br /><br /><br />10. 
INSTALLING WINDOWS 98 SECOND EDITION WITH WINDOWS NT.<br /><br />You cannot install Windows 98 Second Edition over any version of Windows NT, but they can exist together on a single system. However, for compatibility reasons, it is recommended that you install each to a separate hard disk or partition. If Windows NT is already installed, Windows 98 Setup will add itself to the Windows NT boot menu to allow the user to multi-boot between Windows 98 and Windows NT.<br /><br />If you can no longer boot Windows NT, you should boot from the Windows NT recovery disks and choose the Repair option to restore the Windows NT boot files.<br /><br />When installing Windows 98 on a system with drives created with Windows NT, you may receive the following error:<br /><br />"Setup has detected that your hard disk has a 64K-cluster FAT partition. Because ScanDisk does not work on disks with this cluster size, Setup cannot continue. To complete Setup You must repartition your hard disk, format the partition with a FAT file system that has a cluster size of 32K or less, and then restart Setup."<br /><br />Running Setup with the "/is" parameter (e.g., Setup /is) will bypass ScanDisk and avoid this problem.<br /><br />10.0.1 Setting up a dual-boot scenario with Windows NT<br /><br />To set up a dual-boot configuration on an x86 computer, install the operating system in the usual way, and then edit the Boot.ini file as described below. All system startup info is stored in the Boot.ini file, which is automatically created during Setup at the root of your computer's hard disk.<br /><br />>>>To edit the Boot.ini file:<br /><br />1. In Windows Explorer, click View, click Options, and then click "Show all files."<br /><br />2. Make sure "Hide file extensions for known file types" is not checked, and then click OK.<br /><br />3. Right-click the Boot.ini file, and then click Properties.<br /><br />4. Click to clear the Read-only check box, and then click OK.<br /><br />5. 
Right-click the Boot.ini file, click Copy, right-click a blank area of the Explorer window, and then click Paste. A backup copy with the file name "Copy of Boot.ini" will be created.<br /><br />6. Double-click the Boot.ini file.<br /><br />7. Add the name and location of the alternate system in the [operating systems] section of the file, as in the following example:<br /><br /> [operating systems]<br /> C:\Winnt="Windows NT 4.0"<br /> C:\="Microsoft Windows"<br /><br />8. Save and close the Boot.ini file.<br /><br />9. Right-click the Boot.ini file, and then click Properties.<br /><br />10. Select the Read-only check box, and then click OK.<br /><br /><br />11. SETUP ERROR MESSAGES<br /><br />This section lists specific messages that you may encounter during Setup and provides information about what to do next.<br /><br />Message SU0018<br />"Setup cannot create files on your startup drive and cannot set up Windows 98. There may be too many files in the root directory of your startup drive, or your startup drive letter may have been remapped."<br /><br />The root folder of a drive holds a maximum of 512 entries (files or folders). This message indicates that Setup has detected too many directory entries in the root folder of your computer, and Setup cannot create the files it needs to set up Windows 98. Move or delete some files from the root folder of your drive, and then run Setup again.<br /><br />"Unrecoverable Setup Error" Message<br />"Unrecoverable Setup Error. Setup cannot continue on this system configuration. Click OK to quit Setup." This error could be caused by various conditions. See "General Setup Notes" and "INSTALLING WINDOWS 98 SECOND EDITION FROM MS-DOS."<br /><br />Long File Names Error Messages<br />If you see the message "Setup has detected that the program, Long File Names, is installed in this directory. Setup cannot continue," quit Setup, and then remove Long File Names from your computer by using the Uninstall feature in Long File Names. 
See "View Software" for more information.<br /><br />Not Enough Memory Messages<br />If you encounter an Out of Memory message, you can increase conventional memory by commenting out TSRs and loading device drivers into the upper memory area. <br /><br />Not Enough Disk Space Messages<br />You can recover disk space by completing any or all of the following steps:<br /><br />* Right-click Recycle Bin, and then click Empty Recycle Bin.<br /><br />* Delete the contents of your Internet browser cache folder.<br /><br />* Delete files with the extensions .bak and .tmp.<br /><br />* Delete unused program folders (be sure to back up data first).<br /><br />* Delete the old MS-DOS folder, unless you intend to configure your computer to run both Windows 98 and MS-DOS. (First, be sure you have a Startup Disk that supports access to the CD-ROM drive.)<br /><br />* Delete the hidden file Winundo.dat from the previous installation of Windows 95.<br /><br />* Delete the old Windows 3.1 folder, unless you intend to configure your computer to run both Windows 3.1 and Windows 98.<br /><br />Setup Cannot Write to the Temporary Directory<br />This message may appear because there is insufficient disk space for the temporary directory. If space is available on another drive, use the following command line to change the temporary directory location:<br /><br /> Setup /T:<drive letter>:\TEMP<br /><br />If you do not have space available on another drive, free some disk space, and then run Setup again. See the "Not Enough Disk Space Messages" section for files that can be deleted.<br /><br />If you have Multimedia Cloaking and are installing Windows 98 from floppy disks, Setup may not run successfully. If you see messages about Setup not being able to read .cab files, follow these steps:<br /><br />1. Remove the line referencing Cacheclk.exe from your Config.sys and Autoexec.bat files.<br />2. Restart your computer.<br />3. 
Run Setup again.<br /><br />Message SU0010, SU0012, SU0015, or SU0016<br />If you receive one of these messages during Setup, see "INSTALLING WINDOWS 98 SECOND EDITION WITH WINDOWS NT" and "INSTALLING WINDOWS 98 SECOND EDITION ON A SYSTEM RUNNING OS/2" for more information.<br /><br />Message SU0011<br />If your hard disk is password-protected, Setup will not complete successfully. You must first remove the password protection. For more information, see your computer documentation.<br /><br />Message SU0013<br />To set up Windows 98, your startup drive must be an MS-DOS boot partition. If your startup drive is formatted as HPFS or NTFS, you must create an MS-DOS boot partition before running Setup. For more information about creating an MS-DOS boot partition, see your computer documentation.<br /><br />You may also receive this error if you have third-party partitioning software such as EZ-Drive or Disk Manager installed. If so, reboot your system and run Setup from an MS-DOS command prompt. 
For more information, see "Running Setup from MS-DOS."<br /><br />Standard Mode Messages<br />If you get any of the following error messages, remove any memory managers (such as EMM386.exe, QEMM, or 386Max) from your Config.sys file, and then run Setup again.<br /><br /> Standard Mode: Invalid DPMI return.<br /> Standard Mode: Fault in MS-DOS Extender.<br /> Standard Mode: Bad Fault in MS-DOS Extender.<br /> Standard Mode: Unknown stack in fault dispatcher.<br /> Standard Mode: Stack Overflow.<br /><br /><br />Microsoft Windows 2000<br /><br /><br />Before You Begin<br /><br />To ensure a successful installation, you should complete the following tasks, which are described in the sections that follow, before you install Windows 2000:<br /><br />• Make sure your hardware components meet the minimum requirements.<br />• Obtain Windows 2000-compatible hardware and software, such as upgrade packs, new drivers, and so on.<br />• Obtain network information.<br />• Back up your current files before upgrading, in case you need to restore your current operating system.<br />• Determine whether you want to perform an upgrade or install a new copy of Windows.<br />• If you're installing a new copy, identify and plan for any advanced Setup options you might want.<br /><br />Meeting Hardware Requirements<br /><br />Before you install Windows 2000, make sure your computer meets the following minimum hardware requirements:<br /><br />• 133 MHz Pentium or higher microprocessor (or equivalent). Windows 2000 Professional supports up to two processors on a single computer.<br />• 64 megabytes (MB) of RAM recommended minimum; 32 MB of RAM is the minimum supported, and 4 gigabytes (GB) of RAM is the maximum.<br />• A 2 GB hard disk with 650 MB of free space. 
<br />• If you're installing over a network, more free hard disk space is required.<br />• VGA or higher resolution monitor.<br />• Keyboard.<br />• Microsoft Mouse or compatible pointing device (optional).<br /><br />For CD-ROM installation:<br /><br />• A CD-ROM or DVD drive.<br />• High-density 3.5-inch disk drive, unless your CD-ROM drive is bootable and supports starting the Setup program from a CD.<br /><br />For network installation:<br /><br />• Windows 2000-compatible network adapter card and related cable (see the Hardware Compatibility List, Hcl.txt, in the Support folder on the Windows 2000 Professional CD).<br />• Access to the network share that contains the Setup files.<br /><br /><br />Checking Hardware and Software Compatibility<br /><br />Windows 2000 Setup automatically checks your hardware and software and reports any potential conflicts. To ensure a successful installation, however, you should determine whether your computer hardware is compatible with Windows 2000 before you start Setup.<br /><br />You can view the Hardware Compatibility List (HCL) by opening the Hcl.txt file in the Support folder on the Windows 2000 Professional CD. If your hardware isn't listed, Setup may not be successful. <br /><br />Important: Windows 2000 supports only those devices listed on the HCL. If your hardware isn't on this list, contact the hardware manufacturer and ask if there's a Windows 2000 driver for the component. You don't need to obtain drivers for Plug and Play devices. If you have a program that uses 16-bit drivers, you need to get 32-bit drivers from the software vendor to ensure that the program functions properly after the upgrade.<br /><br />During Setup, you can use upgrade packs to make your existing Windows 95 and Windows 98 software compatible with Windows 2000. 
Upgrade packs are available from the appropriate software manufacturers.<br /><br /><br />Obtaining Network Information<br /><br />If your computer won't be participating on a network, skip this section.<br /><br />First, you need to decide whether your computer is joining a domain or a workgroup. If you don't know which option to choose or if your computer won't be connected to a network, select the Workgroup option. (If you do, you can join a domain after you install Windows 2000.) If you select the Domain option, ask your network administrator to create a new computer account in that domain or to reset your existing account.<br /><br />If your computer is currently connected to a network, you should get the following information from your network administrator before you begin Setup:<br /><br />• Name of your computer<br />• Name of the workgroup or domain<br />• TCP/IP address (if your network doesn't have a Dynamic Host Configuration Protocol [DHCP] server)<br /><br />If you want to connect to a network during Setup, you must have the correct hardware installed on your computer and be connected by network cable. <br /><br /><br />Backing Up Your Files<br /><br />If you're upgrading from a previous version of Windows, you should back up your current files. You can back up files to a disk, a tape drive, or another computer on your network.<br /><br />How you back up your files depends on your current operating system. If you're using Windows 95 or Windows 98, you may need to install the Windows Backup program. If you're using Windows NT 3.51 or Windows NT 4.0, Windows Backup is installed by default. You must have a tape drive installed to use the Backup tool in Windows NT.<br /><br /><br />Upgrading vs. Installing a New Copy<br /><br />After you begin Windows 2000 Setup, one of the first decisions you have to make is whether to upgrade your current operating system or to perform an entirely new installation. 
During Setup, you're asked to choose between upgrading and installing a new copy of Windows (a "clean install").<br /><br />During an upgrade, Setup replaces existing Windows files but preserves your existing settings and applications. Some applications may not be compatible with Windows 2000 and therefore may not function properly after an upgrade. You can upgrade to Windows 2000 Professional from the following operating systems:<br /><br />• Windows 95 (all releases), Windows 98 (all releases)<br />• Windows NT 3.51 Workstation, Windows NT 4.0 Workstation (including service packs)<br /><br />If you choose to install a new copy, Setup installs Windows 2000 in a new folder. If you're currently using an unsupported operating system (such as Microsoft Windows 3.1 or OS/2), you must install a new copy. You have to reinstall applications and reset your preferences when you install a new copy.<br /><br />Determining Advanced Setup Needs<br /><br />If you're already using Windows 95, Windows 98, Windows NT 3.51, or Windows NT 4.0 and you choose to install a new copy during Windows 2000 Setup, the Select Special Options screen appears during Setup. From this screen, you can select Accessibility and Language settings. <br /><br />If you want to modify the way Setup installs Windows 2000, you can click Advanced Options, and then perform any of the following tasks:<br /><br />• Change the default location of the Setup files.<br />• Store system files in a folder other than the default folder (Winnt).<br />• Copy the installation files from the CD to the hard disk.<br />• Select the partition on which to install Windows 2000.<br /><br />Unless you're an advanced user, it's recommended that you use the default settings.<br /><br /><br />Running Setup<br /><br /><br />The Setup wizard gathers information, including regional settings, names, and passwords. 
Setup then copies the appropriate files to your hard disk, checks the hardware, and configures your installation. When the installation is complete, you're ready to log on to Windows 2000. Note that your computer restarts several times during Setup.<br /><br />How you start Setup depends on whether you're upgrading or installing a new copy of Windows. Determine your installation method, go to the appropriate section in this readme series, and then follow the procedures for your Setup scenario.<br /><br /><br />If you’re installing a New Copy (Clean Install)<br /><br />If your computer has a blank hard disk or your current operating system isn't supported, you need to start your computer using one of the following:<br /><br />• The Setup startup disks.<br />• The Windows 2000 Professional CD, if your CD-ROM drive is bootable. Some newer CD-ROM drives can boot from the CD and automatically launch Setup.<br /><br />If you don't have the Setup startup disks, you can create replacement disks.<br /><br />The following sections discuss the different installation methods available.<br /><br />To install a new copy using the Setup startup disks <br />1. With your computer turned off, insert the Windows 2000 Setup startup disk 1 into your floppy disk drive. <br />2. Start your computer. <br /> <br /> Setup starts automatically. <br /><br /> 3. Follow the instructions that appear.<br /><br />To install a new copy using the CD <br />1. Start your computer by running your current operating system and then insert the Windows 2000 Professional CD into your CD-ROM drive. <br />2. If Windows automatically detects the CD, click Install Windows 2000. Setup starts.<br /><br />If Windows doesn't automatically detect the CD, start Setup from the Run command. <br />• In Windows 95, Windows 98, or Windows NT 4.0, click Start, and then click Run. <br />• In Windows NT 3.51 or Windows 3.1, in Program Manager, click File, and then click Run. <br /><br />3. 
At the prompt, type the following command, replacing d with the letter of your CD-ROM drive: <br /> <br /> d:\i386\winnt32.exe <br /><br />If you're using Windows 3.1 or the command prompt, type the following command at the prompt, replacing d with the letter of your CD-ROM drive:<br /> <br /> d:\i386\winnt.exe<br /> <br />4. Press ENTER.<br />5. Follow the instructions that appear.<br /><br />To install a new copy using a network connection <br /><br />1. Using your existing operating system, establish your connection to the shared network folder that contains the Setup files. You can also use an MS-DOS(r) or network installation disk to connect to the network server, if the disk contains network client software.<br /><br />Your network administrator will be able to provide you with this path.<br /><br />2. If your computer is currently running Windows 95, Windows 98, or a previous version of Windows NT, at the command prompt, type the path to the file winnt32.exe. If your computer isn't running one of the above versions of Windows, at the command prompt, type the path to the file winnt.exe. <br />3. Press ENTER. <br />4. Follow the instructions that appear.<br /><br /><br /><br />If you’re Upgrading<br /><br />The upgrade process is simple. The Setup wizard detects and installs the appropriate drivers, or it creates a report on devices that couldn't be upgraded, so you can be sure your hardware and software are compatible with Windows 2000.<br /><br />Important: You must uncompress any DriveSpace(r) or DoubleSpace(r) volumes before upgrading to Windows 2000.<br /><br />To upgrade Windows 95, Windows 98, or Windows NT 4.0 from the CD <br />1. Start your computer by running your current operating system and then insert the Windows 2000 Professional CD into your CD-ROM drive. <br />2. If Windows automatically detects the CD and asks if you would like to upgrade your computer to Windows 2000 Professional, click Yes. 
Otherwise, click Start, and then click Run. At the prompt, type the following command, replacing d with the letter assigned to your CD-ROM drive:<br /><br /> d:\i386\winnt32.exe <br /><br /> 3. Press ENTER. <br /> 4. Follow the instructions that appear.<br /><br />To upgrade Windows NT 3.51 from the CD <br /><br />1. Start your computer by running your current operating system and then insert the Windows 2000 Professional CD into your CD-ROM drive. <br />2. In Program Manager, click File, and then click Run. At the prompt, type the following command, replacing d with the letter of your CD-ROM drive:<br /> <br /> d:\i386\winnt32.exe <br /><br /> 3. Press ENTER. <br /> 4. Follow the instructions that appear.<br /><br />To upgrade from a network connection <br /><br />1. Using your current operating system, establish a connection to the shared network folder that contains the Setup files. If you have an MS-DOS or network installation disk that contains network client software, you can use that disk to connect to the shared folder. <br /><br />Your network administrator will be able to provide you with this path. <br /><br />2. If your computer is currently running Windows 95, Windows 98, or a previous version of Windows NT, at the command prompt, type the path to the file winnt32.exe. <br />3. Press ENTER. <br />4. When you're asked if you would like to upgrade your computer to Windows 2000 Professional, click Yes. <br />5. Follow the instructions that appear.<br /><br /><br />Collecting User and Computer Information<br /><br />The Windows 2000 Setup wizard leads you through the process of gathering information about you and your computer. Although much of this installation process is automatic, you may need to provide information or select settings in the following screens, depending on the current configuration of your computer:<br /><br />• Licensing Agreement. 
If you agree with the terms, select I accept this agreement to continue with Setup.<br />• Select Special Options. Use this screen to customize Windows 2000 Setup, language, and accessibility settings for new installations. You can set up Windows 2000 to use multiple languages and regional settings.<br /><br />• Upgrading to the Windows 2000 File System (NTFS). Windows 2000 can automatically convert partitions on your hard disk to NTFS, or you can keep your existing file systems. If you're upgrading, Setup uses your current file system; however, you can change to NTFS, the recommended file system for Windows 2000.<br /><br />• Regional Settings. Change the system and user locale settings for different regions and languages.<br /><br />• Personalize Your Software. Enter the full name of the person and, optionally, the organization to which this copy of Windows 2000 is licensed.<br /><br />• Computer Name and Administrator Password. Enter a unique computer name that differs from other computer, workgroup, or domain names on your network. Setup suggests a computer name, but you can change the name.<br /><br />Setup automatically creates an Administrator account during the installation. When you use this account, you have full rights over the computer's settings and can create user accounts on the computer. Logging on as Administrator after you install Windows 2000 gives you the administrative privileges you need to manage your computer. Specify a password for the Administrator account. For security reasons, you should always assign a password to the Administrator account. Take care to remember and protect your password.<br /><br />• Date and Time Settings. Verify the date and time for your region, select the appropriate time zone, and then select whether you want Windows 2000 to automatically adjust for daylight saving time.<br />• Networking Settings. 
Unless you're an advanced user, select the Typical settings option for your network configuration. Select the Custom settings option to manually configure network clients, services, and protocols.<br />• Workgroup or Computer Domain. During Setup, you must join either a workgroup or a domain.<br />• Provide Upgrade Packs. Some software manufacturers provide upgrade packs that allow your programs to work with Windows 2000. If you don't have any upgrade packs, simply click Next to continue with Setup.<br />• Network Identification Wizard. If your computer is participating on a network, this wizard prompts you to identify the users who will be using your computer. If you indicate that you're the only user, you're assigned Administrator privileges.<br /><br />Providing Networking Information<br />During or after Setup, you need to join either a workgroup or a domain. If you won't be working on a network, specify that you want to join a workgroup.<br /><br />Joining a Workgroup<br />A workgroup is one or more computers with the same workgroup name (for example, a "peer-to-peer" network). Any user can join a workgroup by simply specifying the workgroup name; you don't need special permissions to join a workgroup. You must provide an existing or new workgroup name, or you can use the workgroup name that Windows 2000 suggests during Setup.<br /><br />Joining a Domain<br />A domain is a collection of computers defined by a network administrator. Unlike joining a workgroup, which you can do yourself, joining a domain requires permission from the network administrator. A computer account identifies your computer to the domain, while the user account identifies you to your computer.<br /><br />Joining a domain during Setup requires a computer account in the domain you want to join. If you're upgrading from Windows NT, Setup uses your existing computer account. Otherwise, you'll be asked to provide a new computer account. 
Ask your network administrator to create a computer account before you begin Setup. Or if you have the appropriate privileges, you can create the account during Setup and join the domain. To join a domain during Setup, you need to provide your user name and password.<br /><br />Note: If you have difficulty joining a domain during Setup, join a workgroup instead, and then join the domain after you finish installing Windows 2000.<br /><br /><br /><br />Starting Windows 2000<br /><br />After gathering information, the Setup wizard completes the installation. The computer restarts several times, and then the logon prompt for Windows 2000 appears. After you log on, you can register your copy of Windows 2000, create user accounts, and reconfigure any settings that you entered in Setup.<br /><br />Logging on to Windows 2000<br />When your computer restarts after installation, you log on to Windows 2000 for the first time. If you upgraded from a previous version of Windows and already had an existing user account, you can log on using that account and password.<br /><br />If you don't have a user account, you need to log on using the Administrator account and the password you selected during Setup. Then you can create your user account.<br /><br />To log on to Windows 2000 using the Administrator account <br /><br />1. In the Log on to Windows 2000 dialog box, type the Administrator password that you created during Setup. <br />2. Press ENTER.<br /><br />Windows 2000 starts, and the Welcome screen appears. <br /><br />Creating a User Account<br />Your user account identifies your user name and password, the groups you're a member of, which network resources you have access to, and your personal files and settings. Each person who regularly uses the computer should have a user account. The user account is identified by a user name and a password, both of which the user types when logging on to the computer. 
You can create individual user accounts after logging on to the computer as Administrator.<br /><br />To create your user account <br /><br />1. Click the Start button, point to Settings, and then click Control Panel. <br />2. Double-click Users and Passwords. <br />3. Click Add.<br /><br /> The Add New User wizard appears. <br /><br /> 4. Follow the instructions that appear.<br /><br />After you've added your user account, you're ready to log off as Administrator and log on using your user account. <br /><br />Registering Your Copy of Windows 2000<br />To open the Welcome screen, click Start, click Run, type welcome, and then click OK.<br /><br />If you have a modem, you can register your copy of Windows 2000 by starting the Registration wizard in the Welcome to Windows 2000 dialog box. If you do not have a modem or an Internet connection, use the registration card included in the Windows 2000 package to register.<br /><br />Advanced Setup Options <br /><br />You can set up Microsoft(r) Windows(r) 2000 Professional without using advanced Setup options. The following sections describe how you can create dual-boot configurations, manage disk partitions, install Windows 2000 on multiple computers, or use alternate file systems.<br /><br /><br />Understanding Advanced Setup Options<br /><br />The information in the following sections helps you to make decisions about how you install Windows 2000. Unless you're an advanced user, it's recommended that you use the default settings.<br /><br />File Systems<br />Before you install Windows 2000, you should decide which file system you want to use. A file system is the method by which information is stored on a hard disk.<br /><br />Windows 2000 supports the NTFS file system or one of the file allocation table file systems (FAT or FAT32).<br /><br />NTFS<br />The NTFS file system is the recommended file system for use with Windows 2000. 
NTFS has all of the basic capabilities of FAT, and it provides the following advantages over the FAT and FAT32 file systems:<br /><br />• Better file security.<br />• Better disk compression.<br />• Support for large hard disks, up to 2 terabytes (TB). (The maximum drive size for NTFS is much greater than that for FAT, and as drive size increases, performance with NTFS doesn't degrade as it does with FAT.)<br /><br />If you're using a dual-boot configuration (using both Windows 2000 and another operating system on the same computer), you may not be able to gain access to files on NTFS partitions from the other operating system on your computer. For this reason, you should probably use FAT32 or FAT if you want a dual-boot configuration.<br /><br />FAT and FAT32<br />FAT32 is an enhanced version of the FAT file system that can be used on drives from 512 megabytes (MB) to 2 TB in size. FAT and FAT32 offer compatibility with operating systems other than Windows 2000. If you're setting up a dual-boot configuration, you should probably use FAT or FAT32.<br /><br />If you're dual booting Windows 2000 and another operating system, choose a file system based on the other operating system, using the following criteria:<br /><br />• Format the partition as FAT if the installation partition is smaller than 2 gigabytes (GB), or if you're dual booting Windows 2000 with MS-DOS(r), Windows 3.1, Windows 95, Windows 98, or Windows NT. It's recommended that you use NTFS rather than FAT32 for partitions larger than 32 GB.<br />• Use FAT32 for partitions that are 2 GB or larger. If you choose to format using FAT during Windows 2000 Setup and your partition is greater than 2 GB, Setup automatically formats it as FAT32.<br /><br /><br /><br />Disk Partitions<br />Disk partitioning is a way of dividing your hard disk so that each section functions as a separate unit. 
You can create a partition to organize information, for example, to back up data, or to dual boot with another operating system. When you create partitions on a disk, you divide the disk into one or more areas that can be formatted for use by a file system, such as FAT or NTFS.<br /><br />If you're performing a new installation, Windows 2000 Setup automatically selects an appropriate disk partition-unless you click Advanced Options during Setup and specify otherwise. A hard disk can contain up to four partitions.<br /><br />Configuring Disk Partitions<br />Depending on your existing hard disk configuration, you have the following options during Setup:<br /><br />• If the hard disk is unpartitioned, you can create and size the Windows 2000 partition.<br />• If the existing partition is large enough, you can install Windows 2000 on that partition.<br />• If the existing partition is too small, but you have adequate unpartitioned space, you can create a new Windows 2000 partition in that space.<br />• If the hard disk has an existing partition, you can delete it to create more unpartitioned disk space for the Windows 2000 partition. Deleting an existing partition also erases any data on that partition.<br /><br />If you're setting up a dual-boot configuration of Windows 2000 Professional, it's important to install Windows 2000 on its own partition. Installing Windows 2000 on the same partition as another operating system may cause Setup to overwrite files installed by the other operating system.<br /><br />Sizing Disk Partitions<br />Although Windows 2000 requires a minimum of 500 MB of free disk space for installation, using a large installation partition provides flexibility for adding future updates, operating system tools, or<br />other files.<br /><br />During Setup, you should create and size only the partition on which you want to install Windows 2000. 
After Windows 2000 is installed, you can use Disk Management to make changes or create new partitions on your hard disk.<br /><br />Converting vs. Reformatting Existing Disk Partitions<br />Before you run Setup, decide whether you want to keep, convert, or reformat an existing partition. The default option for an existing partition is to keep the existing file system intact, thus preserving all files on that partition.<br /><br />If you decide to convert or reformat, you need to select an appropriate file system (NTFS, FAT, or FAT32). The following guidelines should help you decide.<br /><br />Important: Before you change file systems on a partition, you should back up the information on the partition because reformatting the partition deletes the existing data.<br /><br />Should I convert my existing partition to NTFS?<br />You can convert an existing partition to NTFS during Setup to make use of Windows 2000 security. You can also convert file systems from FAT to NTFS at any time after Setup by using Convert.exe from the command prompt. This option preserves your existing files, but only if <br />Windows 2000 has access to files on that partition. Use this option if:<br /><br />• You want to take advantage of NTFS features such as security, disk compression, and so on.<br />• You aren't dual booting with another operating system-other than Windows NT 4.0 Service Pack 4 (SP4) or later, which can use a Windows 2000 NTFS partition.<br /><br /><br />Should I always use NTFS for my file system?<br />NTFS is the recommended file system for Windows 2000. However, there are specific reasons that you might want to use another file system. If you format a partition with NTFS, only Windows 2000 can gain access to files subsequently created on that partition. 
If you plan to access files from other operating systems (including MS-DOS), you should choose to install a FAT file system.<br /><br />What happens if I reformat my existing partition?<br />Reformatting a partition erases all existing files on that partition. Make sure to back up your files before you reformat a partition.<br /><br />Important: To convert an NTFS partition to FAT, you must first back up all of your files, reformat the partition as FAT (which erases all the files), and then restore the files from backup. You can't restore an NTFS partition created in Windows NT after you convert it to the version of NTFS used in Windows 2000. To convert a FAT partition to FAT32, you must first back up all your files, reformat the partition as FAT32 (which erases all the files), and then restore the files from backup.<br /><br />Dual-Boot Configuration<br />If you use a dual-boot configuration on your computer, you can choose between operating systems (or between versions of the same operating system) every time you start your computer. You can also set up a multiboot configuration, with more than two operating systems on one computer.<br /><br />Windows 2000 supports dual booting with the following operating systems:<br /><br /> * Windows NT 3.51, Windows NT 4.0<br /> * Windows 95, Windows 98<br /> * Windows 3.1, Windows for Workgroups 3.11<br /> * MS-DOS<br /> * OS/2<br /><br />To set up a dual-boot configuration, you must use a separate partition for each operating system. 
During Windows 2000 Setup, you can use the Advanced Setup option to select a folder on an unused partition.<br /><br />IMPORTANT: It's strongly recommended that you create an Emergency Repair Disk before you install another operating system on your computer.<br /><br />Before You Dual Boot<br />If you want to set up a dual-boot configuration to have Windows 2000 Professional and another operating system, such as MS-DOS or Windows 98, available on your computer, first review the following precautions:<br /><br />• Each operating system should be installed on a separate drive or disk partition.<br />• Because you're performing a new installation of Windows 2000, you need to reinstall any programs, such as word processing or e-mail software, after Setup is complete.<br />• You should use a FAT file system for dual-boot configurations. Although using NTFS in a dual boot is supported, such a configuration introduces additional complexity into the choice of file systems. For more information about using NTFS with a dual-boot configuration, see the Windows 2000 Professional Resource Kit.<br />• To set up a dual-boot configuration between MS-DOS or Windows 95 and Windows 2000, you should install Windows 2000 last. 
Otherwise, important files needed to start Windows 2000 could be overwritten.<br />• For a dual boot of Windows 2000 with Windows 95 or MS-DOS, the primary partition must be formatted as FAT; for a dual boot with Windows 95 OSR2 or Windows 98, the primary partition must be formatted as FAT or FAT32, not NTFS.<br />• If you're upgrading a dual-boot computer, you can't gain access to NTFS partitions from any operating system other than Windows NT 4.0 with SP4.<br />• If you install Windows 2000 on a computer that dual boots OS/2 and MS-DOS, Windows 2000 Setup configures your system so you can dual boot between Windows 2000 Professional and the operating system (MS-DOS or OS/2) you most recently used before running Windows 2000 Setup.<br />• Don't install Windows 2000 on a compressed drive unless the drive was compressed with the NTFS file system compression utility.<br /><br />It isn't necessary to uncompress DriveSpace® or DoubleSpace® volumes if you plan to dual boot with Windows 95 or Windows 98; however, the compressed volume won't be available while you're running Windows 2000.<br /><br />• Windows 95 or Windows 98 might reconfigure hardware settings the first time you use them, which can cause problems if you're dual booting with Windows 2000.<br />• If you want your programs to run on both operating systems on a dual-boot computer, you need to install them from within each operating system. 
You can't share programs across operating systems.<br /><br />Dual Booting with Windows NT<br />If you plan a dual-boot configuration with Windows NT and Windows 2000, first review the following precautions:<br /><br />• If the dual-boot computer is part of a Windows NT or Windows 2000 domain, each installation of Windows NT Workstation or Windows 2000 Professional must have a different computer name.<br />• If your hard disk is formatted with only NTFS partitions, it's not recommended that you dual boot Windows 2000 with Windows NT.<br />• If you're using NTFS and dual booting with Windows NT, you must upgrade to Windows NT 4.0 SP4 or later before continuing with the Windows 2000 installation.<br /><br /><br />Microsoft Windows Server 2003<br /><br />Contents<br /><br />1.0 Preparing Your System for an Upgrade<br /><br />2.0 Starting Setup for an Upgrade<br /><br />3.0 Preparing Your System for a New Installation<br /><br />4.0 Starting Setup for a New Installation<br /><br />5.0 Planning for Unattended Setup<br /><br />6.0 Entering Server Settings for a New Installation<br /><br />7.0 Configuring Your Server<br /><br />8.0 Product Activation for Products in the Windows Server 2003 Family<br /><br /><br />1.0 PREPARING YOUR SYSTEM FOR AN UPGRADE<br /><br />This section describes the basic steps to take in preparing your server for an upgrade.<br /><br />1.1 Checking the System Log for Errors<br /><br />Use Event Viewer to review the system log for recent or recurring errors that could cause problems during the upgrade. For information about viewing errors, see Help for the operating system that you are running.<br /><br />1.2 Backing up Files<br /><br />Before upgrading, it is recommended that you back up your current files, including anything containing configuration information, for example, the System State and the system and boot partitions. 
You can back up files to a variety of different media, such as a tape drive or the hard disk of another computer on the network.<br /><br />1.3 Preparing Mirror Sets and Other Disk Sets for an Upgrade (Windows NT 4.0 only)<br /><br />With the disk management technologies in Microsoft Windows NT 4.0, you could create volume sets, mirror sets, stripe sets, or stripe sets with parity, each with specific capabilities and limitations. By using dynamic disks, introduced with Microsoft Windows 2000, you can take advantage of similar technologies, and with Windows Server 2003, Standard Edition, you can also extend dynamic volumes without repartitioning or reformatting.<br /><br />1.4 Disconnecting UPS Devices<br /><br />If you have an uninterruptible power supply (UPS) connected to your target computer, disconnect the connecting serial cable before running Setup. Setup automatically attempts to detect devices connected to serial ports, and UPS equipment can cause problems with the detection process.<br /><br />1.5 Reviewing Hardware and Software<br /><br />When you start Setup for an upgrade, the first process it carries out is a check for compatible hardware and software on your computer. Setup displays a report before continuing. Use this report, along with information in Relnotes.htm (in the \Docs folder on the Setup CD), to find out whether you need to update your hardware, drivers, or software before upgrading.<br /><br /><br />2.0 STARTING SETUP FOR AN UPGRADE<br /><br />If you are upgrading to Windows Server 2003, Standard Edition, you can start Setup from the CD or from a network. <br /><br />IMPORTANT: To run Setup for an upgrade, you must be a member of the Administrators group on the local computer. If the computer is joined to a domain, members of the Domain Admins group might be able to perform this procedure. 
As a security best practice, if you start Setup from a computer running Windows 2000, consider using Run as. Run as is a secondary logon method that you can use to start commands or programs using a different security context. For example, you can log on as a member of the Users group and, without logging off, run a command as a member of the Administrators group. <br /><br /><br />>>>TO START AN UPGRADE FROM THE CD ON A COMPUTER RUNNING WINDOWS<br /><br /> 1. Insert the CD in the drive, and wait for Setup to display a dialog box.<br /><br /> 2. Follow the Setup instructions.<br /><br />>>>TO START AN UPGRADE FROM A NETWORK<br /><br /> 1. On a network server, share the installation files, either by inserting the CD and sharing the CD-ROM drive or by copying the files from the I386 folder on the CD to a shared folder.<br /><br /> 2. On the computer on which you want to install Windows Server 2003, Standard Edition, connect to the shared folder or drive that contains the Setup files.<br /><br /> 3. Run Setup.exe.<br /><br /> 4. Follow the Setup instructions.<br /><br /><br />3.0 PREPARING YOUR SYSTEM FOR A NEW INSTALLATION<br /><br />This section describes the basic steps to take in preparing your server for a new installation.<br /><br /><br />3.1 Checking the System Log for Errors<br /><br />If the computer already has a working operating system, review the system log for recent or recurring errors (especially hardware errors) that could cause problems during the installation. For information about viewing event logs, see Help for the operating system on your computer.<br /><br />3.2 Backing up Files<br /><br />Before you perform a new installation, it is recommended that you back up your current files, unless the computer has no files or the current operating system files have been damaged. 
You can back up files to a variety of different media, such as a tape drive or the hard disk of another computer on the network.<br /><br />3.3 Uncompressing the Drive<br /><br />Uncompress any DriveSpace or DoubleSpace volumes before installing. Do not install Windows Server 2003, Standard Edition, on a compressed drive unless the drive was compressed with the NTFS file system compression feature.<br /><br />3.4 Preparing Mirror Sets and Other Disk Sets (Windows NT 4.0 only)<br /><br />With the disk management technologies in Windows NT 4.0, you could create volume sets, mirror sets, stripe sets, or stripe sets with parity, each with specific capabilities and limitations. By using dynamic disks, introduced with Windows 2000, you can take advantage of similar technologies, and with Windows Server 2003, Standard Edition, you can also extend dynamic volumes without repartitioning or reformatting.<br /><br />This transition from the technologies used in Windows NT 4.0 means that you must make certain choices before running Setup for Windows Server 2003, Standard Edition. <br /><br />3.5 Disconnecting UPS Devices<br /><br />If you have an uninterruptible power supply (UPS) connected to your target computer, disconnect the connecting serial cable before running Setup. Setup automatically attempts to detect devices connected to serial ports, and UPS equipment can cause problems with the detection process.<br /><br /><br />4.0 STARTING SETUP FOR A NEW INSTALLATION<br /><br />This section explains how to start Setup for a new installation.<br /><br />Setup works in several stages, prompting you for information, copying files, and restarting. 
Setup concludes with the Manage Your Server program, which you can use to adjust the server configuration for your specific needs.<br /><br />4.1 Providing a Mass Storage Driver or a HAL File<br /><br />If you have a mass storage controller that requires a driver supplied by the manufacturer, or if you have a custom Hardware Abstraction Layer (HAL) file supplied by the manufacturer, provide the appropriate driver file or HAL file during Setup.<br /><br /><br />4.1.1 Mass Storage Drivers and the Setup Process<br /><br />If you have a mass storage controller (such as a SCSI, RAID, or Fibre Channel adapter) for your hard disk, confirm that the controller is designed for products in the Windows Server 2003 family by checking the hardware and software compatibility information in the Windows Catalog at:<br /><br /> http://www.microsoft.com/windows/catalog/<br /><br />If your controller is compatible, but you are aware that the manufacturer has supplied a separate driver file for use with products in the Windows Server 2003 family, obtain the file (on a floppy disk) before you begin Setup. During the early part of Setup, a line at the bottom of the screen prompts you to press F6. Further prompts will guide you in supplying the driver file to Setup so that it can gain access to the mass storage controller.<br /><br />If you are not sure whether you must obtain a separate driver file from the manufacturer of your mass storage controller, you can try running Setup. If the controller is not supported by the driver files on the Setup CD and therefore requires a driver file that is supplied by the hardware manufacturer, Setup stops and displays a message saying that no disk devices can be found, or it displays an incomplete list of controllers. 
After you obtain the necessary driver file, restart Setup, and press F6 when you are prompted.<br /><br /><br />4.1.2 Using a Custom HAL File<br /><br />If you have a custom Hardware Abstraction Layer (HAL) file supplied by your computer manufacturer, before you begin Setup, locate the floppy disk or other medium containing the file. During the early part of Setup, a line at the bottom of the screen prompts you to press F6. At this time, press F5 (not F6). After you press F5, follow the prompts to include your HAL file in the Setup process.<br /><br />4.2 Methods for Starting Setup for a New Installation<br /><br />The sections that follow, "Starting a New Installation from a CD" and "Starting a New Installation from a Network," explain how to start Setup for a new installation. <br /><br /><br />4.2.1 Starting a New Installation from a CD<br /><br />If you use the Setup CD, you have several options for starting Setup, as explained in the following procedures:<br /><br />Note: If you are running Setup on a computer running Microsoft Windows 3.x or MS-DOS, for best efficiency, use disk caching. Otherwise, the Setup process (started from Winnt.exe) could take a long time. To enable disk caching on a computer running Windows 3.x or MS-DOS, you can use SMARTDrive.<br /><br /><br />>>>TO START SETUP FROM THE CD ON A COMPUTER RUNNING MS-DOS<br /><br /> 1. Insert the CD in the drive.<br /><br /> 2. At the command prompt, type:<br /><br /> d:<br /><br /> where d is the drive letter of the CD-ROM drive.<br /><br /> 3. Type:<br /><br /> cd i386<br /><br /> 4. Type:<br /><br /> winnt<br /><br /> 5. Follow the Setup instructions.<br /><br /><br />>>>TO START SETUP FROM THE CD ON A COMPUTER RUNNING WINDOWS<br /><br />Before starting this procedure on a computer running Windows NT 4.0, apply Service Pack 5 or later.<br /><br /> 1. Insert the CD in the drive.<br /><br /> 2. 
To begin Setup, do one of the following:<br /><br /> * For a computer running any version of Windows other than Windows 3.x, wait for Setup to display a dialog box.<br /><br /> * For a computer running Windows 3.x, use File Manager to change to the CD-ROM drive and to change to the I386 directory, and then double-click Winnt.exe.<br /><br /> 3. Follow the Setup instructions.<br /><br />>>>TO START SETUP FOR A NEW INSTALLATION FROM THE CD<br /><br />Another way of using the Setup CD is to start the computer from the CD-ROM drive. This method applies only if you want to perform a new installation, not an upgrade. Using this method, you can perform an installation on a computer that does not have an operating system, although you can also use this method on computers that have operating systems.<br /><br /> 1. Determine whether the computer on which you want to start Setup can be started from the CD-ROM drive and whether you want to perform a new installation (not an upgrade). Continue only if both are true.<br /><br /> 2. Insert the CD in the drive, and then restart the computer.<br /><br /> 3. Follow the instructions for your operating system to boot the computer from the CD.<br /><br /> 4. Wait for Setup to display a dialog box, and then follow the Setup instructions.<br /><br /><br />4.2.2 Starting a New Installation from a Network<br /><br />To install Windows Server 2003, Standard Edition, from a network, you either share the files directly from the CD or copy them to a shared folder. Then, you start the appropriate program to run Setup.<br /><br />>>>TO INSTALL WINDOWS SERVER 2003, STANDARD EDITION, FROM A NETWORK<br /><br /> 1. On a network server, share the installation files, either by inserting the CD and sharing the CD-ROM drive or by copying the files from the I386 folder on the CD to a shared folder.<br /><br /> 2. 
On the computer on which you want to install Windows Server 2003, Standard Edition, connect to the shared Setup files:<br /><br /> * If you are sharing the CD-ROM drive, connect to the shared drive and change to the I386 folder.<br /><br /> * If you are sharing a folder, connect to that folder.<br /><br /> 3. Find and run the appropriate file in the I386 directory of the CD or in the shared folder:<br /><br /> * From a computer running MS-DOS or Windows 3.x, run Winnt.exe.<br /><br /> * From a computer running Windows 95, Windows 98, Windows Millennium Edition, Windows NT with Service Pack 5 or later, Windows 2000, or Windows XP, run Winnt32.exe.<br /><br /> 4. Follow the Setup instructions.<br /><br />5.0 PLANNING FOR UNATTENDED SETUP<br /><br />This section provides general information about unattended Setup. <br /><br />To simplify the process of setting up a product in the Windows Server 2003 family on multiple computers, you can run Setup unattended. To do this, you create and use an answer file, a customized script that answers the Setup questions automatically. Then, you run Winnt32.exe or Winnt.exe with the appropriate options for unattended Setup. Choose the command according to the operating system that is running when you start unattended Setup:<br /><br /> * To start unattended Setup on a computer running MS-DOS or Windows 3.x, use Winnt.exe (with the appropriate options).<br /><br /> * To start unattended Setup on a computer running Windows 95, Windows 98, Windows Millennium Edition, Windows NT, Windows 2000, Windows XP, or a product in the Windows Server 2003 family, use Winnt32.exe (with the appropriate options). With Windows NT 4.0, before starting unattended Setup, apply Service Pack 5 or later.<br /><br /> * To view the command options available for Winnt.exe: On a computer running Windows 3.x or MS-DOS, insert the Setup CD for Windows Server 2003, Standard Edition, in the CD-ROM drive and open the command prompt. 
Then, change to the CD-ROM drive, change to the I386 directory, and type:<br /><br /> winnt /?<br /><br /> * To use an x86-based computer to view the command options available for Winnt32.exe: On a computer running Windows 95, Windows 98, Windows Millennium Edition, Windows NT, Windows 2000, Windows XP, or a product in the Windows Server 2003 family, insert the Setup CD for Windows Server 2003, Standard Edition, in the CD-ROM drive, and open the command prompt. Then, change to the CD-ROM drive, change to the I386 directory, and type:<br /><br /> winnt32 /?<br /><br /> * To use an Itanium architecture-based computer to view the command options available for Winnt32.exe: On an Itanium architecture-based computer running Windows XP 64-Bit Edition; the 64-bit version of Windows Server 2003, Enterprise Edition; or the 64-bit version of Windows Server 2003, Datacenter Edition, insert the Setup CD for the 64-bit version of the product in the CD-ROM drive and open the command prompt (click Start, click Run, and then type cmd). Then, change to the CD-ROM drive, change to the IA64 directory, and type:<br /><br /> winnt32 /?<br /><br /><br />6.0 ENTERING SERVER SETTINGS FOR A NEW INSTALLATION<br /><br />If you are upgrading, you can skip this section because Setup uses your previous settings.<br /><br />After you start Setup, a process begins in which necessary Setup files are copied to the hard disk. During this process, Setup displays dialog boxes that you can use to select various options.<br /><br /><br />Choosing or Creating a Partition for Windows Server 2003, Standard Edition<br /><br />During a new installation of Windows Server 2003, Standard Edition, a dialog box gives you the opportunity to create or specify a partition on which you want to install. 
You can create a partition from the available unpartitioned space, specify an existing partition, or delete an existing partition to create more unpartitioned disk space for the new installation. If you specify any action that will cause information to be erased, you will be prompted to confirm your choice.<br /><br />IMPORTANT: If you delete an existing partition, all data on that partition is erased. Performing a new installation of Windows Server 2003, Standard Edition, on a partition that contains another operating system overwrites the existing operating system.<br /><br /><br />Selecting Regional and Language Options<br /><br />You can set up Windows Server 2003, Standard Edition, to use multiple languages and regional options.<br /><br />If you select a European country or region in the list of countries/regions, or if you live in a country or region where the euro has been introduced, it is a good idea to verify that the default currency settings in Regional and Language Options meet your needs. After you run Setup, you can modify these options by clicking Regional and Language Options in Control Panel.<br /><br />Personalizing Windows<br /><br />Enter your name and, as an option, your organization.<br /><br />Choosing a Licensing Mode<br /><br />Select your client licensing mode. <br /><br />Entering Your Computer Name<br /><br />During Setup, in the Computer Name and Administrator Password dialog box, follow the instructions for entering your computer name. The recommended length for most languages is 15 characters or less. 
For languages that require more storage space per character, such as Chinese, Japanese, and Korean, the recommended length is 7 characters or less.<br /><br />It is recommended that you use only Internet-standard characters in the computer name. The standard characters are the numbers from 0 through 9, uppercase and lowercase letters from A through Z, and the hyphen (-) character. Computer names cannot consist entirely of numbers.<br /><br />If you are using DNS on your network, you can use a wider variety of characters, including Unicode characters and other nonstandard characters, such as the ampersand (&). Using nonstandard characters might affect the ability of non-Microsoft software to operate on your network. <br /><br />The maximum length for a computer name is 63 bytes. If the name is longer than 15 bytes (15 characters in most languages, 7 characters in some), computers running Windows NT Server 4.0 and earlier will recognize this computer by the first 15 bytes of the name only. A name longer than 15 bytes also requires additional configuration steps. <br /><br />If a computer is part of a domain, you must choose a computer name that is different from any other computer in the domain. To avoid name conflicts, the computer name should be unique on the domain, workgroup, or network. If this computer is part of a domain, and it contains more than one operating system, you must use a unique computer name for each operating system that is installed. <br />For example, if the computer name is FileServerNT when the computer is started with Windows NT Server 4.0, the computer must have a different name, perhaps FileServerNew, when it is started with a product in the Windows Server 2003 family. This requirement also applies to a computer that contains multiple installations of the same operating system. 
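The naming rules above can be sketched as a small checker, shown here in Python. This is a hypothetical helper for illustration only; it mirrors the guidance in this section, not Microsoft's actual validation logic.

```python
import re

# Hypothetical helper mirroring the naming guidance above
# (not Microsoft's actual validation logic).
RECOMMENDED_MAX_BYTES = 15  # pre-Windows 2000 systems see only the first 15 bytes

def check_computer_name(name: str) -> list:
    """Return a list of warnings for a proposed computer name."""
    warnings = []
    # Internet-standard characters: 0-9, A-Z, a-z, and the hyphen.
    if not re.fullmatch(r"[0-9A-Za-z-]+", name):
        warnings.append("contains non-Internet-standard characters")
    # A computer name cannot consist entirely of numbers.
    if name.isdigit():
        warnings.append("cannot consist entirely of numbers")
    # 15 bytes is the recommended maximum for most languages.
    if len(name.encode("utf-8")) > RECOMMENDED_MAX_BYTES:
        warnings.append("longer than the recommended 15 bytes")
    return warnings

print(check_computer_name("FileServerNew"))  # []
print(check_computer_name("12345"))          # ['cannot consist entirely of numbers']
```

Names such as FileServerNew from the example above pass all three checks, while an all-numeric or over-long name triggers a warning.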
<br /><br />Setting the Administrator Account Password<br /><br />During Setup, in the Computer Name and Administrator Password dialog box, type a password of up to 127 characters in the Administrator Password box. For the strongest system security, use a password of at least 7 characters, and use a mixture of uppercase and lowercase letters, numbers, and other characters, such as *, ?, or $.<br /><br />IMPORTANT: After Setup is completed, for best security, change the name of the Administrator account (it cannot be deleted) and keep a strong password on the account at all times. <br /><br /><br />Setting the Date and Time<br /><br />During Setup, in the Date and Time Settings dialog box, set the date, time, and time zone. If you want the system to automatically adjust for daylight saving time, select the "Automatically adjust clock for daylight saving changes" check box.<br /><br />You can change your computer's date and time after Setup is complete. If your computer is a member of a domain, your computer clock is probably synchronized automatically by a network time server. If your computer is not a member of a domain, you can synchronize your computer clock with an Internet time server.<br /><br /><br />Specifying Networking Settings<br /><br />You can specify networking information for TCP/IP or other protocols during Setup, or you can use typical settings and then make any necessary changes to your networking configuration after installation.<br /><br /><br />>>>TO ALLOW SETUP TO ASSIGN OR OBTAIN AN IP ADDRESS<br /><br /> * When you click "Typical settings" in the Networking Settings dialog box, Setup checks to see if there is a DHCP server on your network. If there is a DHCP server on your network, DHCP provides an IP address. If there is no DHCP server on your network, Setup will use a limited IP addressing option called Automatic Private IP Addressing (APIPA). 
On a server using APIPA, complete the network configuration after Setup, because a server using APIPA can communicate only with other computers using APIPA on the same network segment.<br /><br />>>>TO SPECIFY A STATIC IP ADDRESS AND SETTINGS NEEDED FOR DNS AND WINS<br /><br /> 1. During Setup, in the Networking Settings dialog box, click "Custom settings," and then click Next.<br /><br /> 2. In the Networking Components dialog box, click Internet Protocol (TCP/IP).<br /><br /> 3. Click Properties.<br /><br /> 4. In the Internet Protocol (TCP/IP) Properties dialog box, click "Use the following IP address."<br /><br /> 5. In IP address, Subnet mask and Default gateway, type the appropriate addresses.<br /><br /> 6. Under "Use the following DNS server addresses," type the address of a preferred DNS server and, optionally, an alternate DNS server.<br /><br /> If the local server is the preferred or alternate DNS server, type the same IP address as assigned in the previous step.<br /><br /> 7. If you will use a WINS server, click Advanced, and then click the WINS tab in the Advanced TCP/IP Settings dialog box to add the IP address of one or more WINS servers.<br /><br /> 8. Click OK in each dialog box, and continue with Setup.<br /><br /><br />Specifying the Workgroup or Domain Name<br /><br />A domain is a group of accounts and network resources that share a common directory database and set of security policies and might have security relationships with other domains. A workgroup is a more basic grouping, intended only to help users find objects such as printers and shared folders within that group. Domains make it easier for an administrator to control access to resources and keep track of users. <br /><br /><br />7.0 CONFIGURING YOUR SERVER<br /><br />When Setup is complete, the computer restarts. Setup has now completed the basic installation. Manage Your Server appears on the screen the first time you log on as the computer's administrator. 
You can use Manage Your Server to install and configure server roles, including file servers, print servers, Web and media servers, and networking and communications servers. You can start Manage Your Server at any time if you are logged on as an administrator. To start Manage Your Server, click Start, and then either click Manage Your Server or point to All Programs, point to Administrative Tools, and then click Manage Your Server.<br /><br />Choosing Server Components<br /><br />You can use the Windows Components Wizard to select the appropriate components for your server. To use this wizard, after running Setup, click Start, and then click Control Panel. In Control Panel, double-click Add or Remove Programs, and then, on the left side of the dialog box, click Add/Remove Windows Components. With this wizard you can choose and install individual components.<br /><br />8.0 PRODUCT ACTIVATION FOR PRODUCTS IN THE WINDOWS SERVER 2003 FAMILY<br /><br />After you install a product in the Windows Server 2003 family, if the product was purchased individually rather than through a volume licensing arrangement, you will have to activate the product unless your hardware manufacturer has activated it for you. Product activation is quick, simple, and unobtrusive, and it protects your privacy. It is designed to reduce software piracy (illegal copies of a product). Over time, reduced piracy means that the software industry can invest more in product development, quality, and support. This results in better products and more innovation for customers.<br /><br /><br />Software reminders<br /><br />Until you activate your product, it provides a reminder each time you log on and at common intervals until the end of the activation grace period stated in your End-User License Agreement (30 days is the typical grace period). 
If your activation grace period passes and you do not activate the product, your computer will continue to function, except that when you log on locally or log on through Remote Desktop for Administration (the new name for the Windows 2000 functionality known as Terminal Services in Remote Administration Mode), you will only be able to use the Activate Windows Wizard.<br /><br />Methods for activation<br /><br />After your operating system is installed, begin activation by clicking Start, and then clicking Activate Windows. (You can also click the key icon that appears in the lower right corner of the screen.) By following the instructions on the screen, you can activate through the Internet or by phone:<br /><br /> * Internet: When you activate through the Internet, your computer transmits coded information that shows that your product key is associated with your computer hardware. Activation is carried out through a secure server. A confirmation ID is passed back to your computer, automatically activating your product. This process normally takes just a few seconds to complete. No personally identifiable information is required to activate your product.<br /><br /> * Phone: When you activate by phone, information on the screen guides you through a few simple steps. When you choose the country or region where you are located, a phone number (toll-free, wherever possible) appears on your screen. When you call the number, a customer service representative asks for the Installation ID that is displayed on your screen. The customer service representative enters that number into a secure database, confirms that the number represents a legally installed product, and provides a confirmation ID to you. 
Then, you type the confirmation ID into the spaces provided on the screen, and activation is complete.<br /><br />Reactivation (rarely needed)<br /><br />If you overhaul your computer by replacing a substantial number of hardware components (not just a few), the operating system might view your hardware as a completely different computer, not the one on which you activated. In this situation, you can call the telephone number displayed on the telephone activation screen, and, through a quick, simple process, you can reactivate your product.Rajesh Babu Rajamanickamhttp://www.blogger.com/profile/16036650026261051481noreply@blogger.com0tag:blogger.com,1999:blog-8180223062941193498.post-39217713767121984392008-01-14T16:43:00.000-08:002008-01-14T16:44:50.377-08:00NETWORKING FUNDAMENTALS<span style="font-weight:bold;">NETWORKING FUNDAMENTALS</span><br /><br />TABLE OF CONTENTS<br />1.0 General 3<br />1.1 Module Objectives: 3<br />1.2 Module Structure: 3<br />2.0 TCP/IP architectural model 4<br />2.1 Inter-networking 4<br />2.2 The TCP/IP protocol layers 6<br />3.0 TCP/IP applications 8<br />3.1 The client/server model 8<br />3.2 Bridges, routers, and gateways 9<br />4.0 The Open Systems Interconnection (OSI) Reference Model 10<br />4.1 The IP Address and Classes 11<br />4.2 IP addressing 11<br />5.0 Domain & Workgroup Models 13<br />5.1 Workgroups 13<br />5.2 Domains 14<br />6.0 Directory and Naming protocols 16<br />6.1 Domain Name System (DNS) 16<br />7.0 The Hierarchical Namespace 17<br />7.1 Fully Qualified Domain Names (FQDNs) 17<br />7.2 Generic domains 18<br />7.3 Country domains 18<br />7.4 Mapping domain names to IP addresses 19<br />7.5 Mapping IP addresses to domain names – pointer queries 19<br />8.0 Virtual Private Network (VPN) 20<br />8.1 What Makes a VPN? 21<br />9.0 VMWare Work Station 23<br />9.1 What Is VMware Workstation? 23<br />9.2 How Is VMware Workstation Used? 23<br />9.3 How Does VMware Workstation Work? 
23<br />9.4 Why Does Business Need VMware Workstation? 24<br />10.0 GENERAL HARDWARE ORIENTED SYSTEM TRANSFER 27<br />10.1 Comprehensive PC management for OS deployment, software distribution, user-state migration, back-up and disaster recovery 27<br />10.2 Centralized Management and Remote Capabilities 27<br />10.3 Benefit From Several New PC Change-Management Capabilities 27<br />10.4 Support Today’s Latest Technologies 27<br />10.5 Clone multiple target PCs using multicasting 28<br />10.6 Typical usage examples 29<br />10.7 Upgrade networked workstations 29<br />11.0 Unit Summary 30<br />11.1 Exercise 30<br /><br />1.0 General<br />This course will give you an understanding of various networking concepts. A basic understanding of networking is essential for any test engineer, enabling him or her to work effectively in a networked environment. These concepts prove useful in client/server-based projects, where many network issues need to be addressed during testing.<br />A good understanding of networking is also essential in performance and load testing.<br />1.1 Module Objectives:<br />At the end of this session you will:<br /><br /> Be able to define and understand TCP/IP protocol fundamentals<br /> Be able to define TCP/IP applications<br /> Understand the concept of domains in networking<br /> Understand VMware and VPN concepts<br /><br />1.2 Module Structure:<br /><br />S.No Topic Duration (hrs)<br />1 TCP/IP Model 1<br />2 OSI Model 1<br />3 Domain and Workgroup Model 2<br />4 VPN 2<br />5 VMware 2<br /> Total Duration 8<br /><br /><br />2.0 TCP/IP architectural model<br />The TCP/IP protocol suite is so named for two of its most important protocols:<br /><br />Transmission Control Protocol (TCP) and Internet Protocol (IP). A less-used name for it is the Internet Protocol Suite, which is the phrase used in official Internet standards documents.
We use the more common, shorter term, TCP/IP, to refer to the entire protocol suite in this module.<br />2.1 Inter-networking<br />The main design goal of TCP/IP was to build an interconnection of networks, referred to as an inter-network, or internet, that provided universal communication services over heterogeneous physical networks. The clear benefit of such an inter-network is the enabling of communication between hosts on different networks, perhaps separated by a large geographical area. The word inter-network (or internet) is simply a contraction of the phrase interconnected network. However, when written with a capital "I", the Internet refers to the worldwide set of interconnected networks. Hence, the Internet is an internet, but the reverse does not apply. <br />The Internet is sometimes called the connected Internet.<br />The Internet consists of the following groups of networks:<br /><br /> Backbones: Large networks that exist primarily to interconnect other networks. Currently the backbones are NSFNET in the US, EBONE in Europe, and large commercial backbones.<br /><br /> Regional networks connecting, for example, universities and colleges. Commercial networks providing access to the backbones to subscribers and networks owned by commercial organizations for internal use that also have connections to the Internet. <br /><br /> Local networks, such as campus-wide university networks. In most cases, networks are limited in size by the number of users that can belong to the network, by the maximum geographical distance that the network can span, or by the applicability of the network to certain environments. For example, an Ethernet network is inherently limited in terms of geographical size. Hence, the ability to interconnect a large number of networks in some hierarchical and organized fashion enables the communication of any two hosts belonging to this inter-network. <br /><br />Figure 1 shows two examples of internets.
Each comprises two or more physical networks.<br /> <br />Another important aspect of TCP/IP inter-networking is the creation of a standardized abstraction of the communication mechanisms provided by each type of network. Each physical network has its own technology-dependent communication interface, in the form of a programming interface that provides basic communication functions (primitives). <br /><br />TCP/IP provides communication services that run between the programming interface of a physical network and user applications. It enables a common interface for these applications, independent of the underlying physical network. The architecture of the physical network is therefore hidden from the user and from the developer of the application. <br /><br />The application need only code to the standardized communication abstraction to be able to function under any type of physical network and operating platform. As is evident in Figure 1, to be able to interconnect two networks, we need a computer that is attached to both networks and can forward data packets from one network to the other; such a machine is called a router. The term IP router is also used because the routing function is part of the Internet Protocol portion of the TCP/IP protocol suite. <br /><br />To be able to identify a host within the inter-network, each host is assigned an address, called the IP address. When a host has multiple network adapters (interfaces), each interface has a unique IP address. <br /><br /><br />The IP address consists of two parts:<br />IP address = <network number><host number><br />The network number part of the IP address identifies the network within the Internet; it is assigned by a central authority and is unique throughout the Internet. The authority for assigning the host number part of the IP address resides with the organization that controls the network identified by the network number.
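This two-part structure can be illustrated with a short sketch (Python here purely for illustration; the 24-bit network-number width is an assumption for the example address, matching the classful rules covered in section 4.2):

```python
# Sketch: splitting a dotted-decimal IP address into its network-number
# and host-number parts. The 24-bit network width used here is an
# illustrative assumption (it matches a class C address).

def dotted_to_int(addr):
    """Convert dotted-decimal notation to a 32-bit integer."""
    value = 0
    for octet in addr.split("."):
        value = (value << 8) | int(octet)
    return value

def split_address(addr, network_bits=24):
    """Return (network number, host number) as integers."""
    value = dotted_to_int(addr)
    host_bits = 32 - network_bits
    return value >> host_bits, value & ((1 << host_bits) - 1)

network, host = split_address("200.1.2.3")
print(network, host)  # 13107458 3  (network number 200.1.2, host number 3)
```

The same address yields a different split for a different network-number width, which is exactly why the class (or, later, the subnet mask) must be known before an address can be interpreted.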
<br />2.2 The TCP/IP protocol layers<br />Like most networking software, TCP/IP is modeled in layers. This layered representation leads to the term protocol stack, which refers to the stack of layers in the protocol suite. It can be used for positioning (but not for functionally comparing) the TCP/IP protocol suite against others, such as Systems Network Architecture (SNA) and the Open System Interconnection (OSI) model. Functional comparisons cannot easily be extracted from this, as there are basic differences in the layered models used by the different protocol suites.<br />By dividing the communication software into layers, the protocol stack allows for division of labor, ease of implementation and code testing, and the ability to develop alternative layer implementations. Layers communicate with those above and below via concise interfaces. In this regard, a layer provides a service for the layer directly above it and makes use of services provided by the layer directly below it. <br />For example, the IP layer provides the ability to transfer data from one host to another without any guarantee of reliable delivery or duplicate suppression. Transport protocols such as TCP make use of this service to provide applications with reliable, in-order data stream delivery. Figure 2 shows how the TCP/IP protocols are modeled in four layers.<br /> <br />These layers include:<br />Application layer: The program that uses TCP/IP for communication provides the application layer. An application is a user process cooperating with another process, usually on a different host (there is also a benefit to application communication within a single host). Examples of applications include Telnet and the File Transfer Protocol (FTP). Port numbers and sockets define the interface between the application and transport layers.<br />Transport layer: The transport layer provides the end-to-end data transfer by delivering data from an application to its remote peer.
Multiple applications can be supported simultaneously. The most-used transport layer protocol is the Transmission Control Protocol (TCP), which provides connection-oriented reliable data delivery, duplicate data suppression, congestion control, and flow control.<br /><br />Another transport layer protocol is the User Datagram Protocol (UDP); it provides connectionless, unreliable, best-effort service. As a result, applications using UDP as the transport protocol have to provide their own end-to-end integrity, flow control, and congestion control, if desired. Usually, UDP is used by applications that need a fast transport mechanism and can tolerate the loss of some data.<br />Inter-network layer: The inter-network layer, also called the Internet layer or the network layer, provides the "virtual network" image of an internet (this layer shields the higher levels from the physical network architecture below it). <br /><br />Internet Protocol (IP) is the most important protocol in this layer. It is a connectionless protocol that does not assume reliability from lower layers. IP does not provide reliability, flow control, or error recovery. These functions must be provided at a higher level. IP provides a routing function that attempts to deliver transmitted messages to their destination.<br /><br />A message unit in an IP network is called an IP datagram. This is the basic unit of information transmitted across TCP/IP networks. Other inter-network layer protocols are ICMP, IGMP, ARP, and RARP.<br />Network interface layer: The network interface layer, also called the link layer or the data-link layer, is the interface to the actual network hardware. This interface may or may not provide reliable delivery, and may be packet or stream oriented. In fact, TCP/IP does not specify any protocol here, but can use almost any network interface available, which illustrates the flexibility of the IP layer.
Examples are IEEE 802.2, X.25 (which is reliable in itself), ATM, FDDI, and even SNA. TCP/IP specifications do not describe or standardize any network layer protocols per se; they only standardize ways of accessing those protocols from the inter-network layer.<br /><br /><br />A more detailed layering model is included in Figure 3.<br /> <br />3.0 TCP/IP applications<br />The highest-level protocols within the TCP/IP protocol stack are application protocols. They communicate with applications on other Internet hosts and are the user-visible interface to the TCP/IP protocol suite. <br />All application protocols have some characteristics in common:<br />• They can be user-written applications or applications standardized and shipped with the TCP/IP product. Indeed, the TCP/IP protocol suite includes application protocols such as:<br />- TELNET for interactive terminal access to remote Internet hosts.<br />- FTP (file transfer protocol) for high-speed disk-to-disk file transfers.<br />- SMTP (simple mail transfer protocol) as an Internet mailing system.<br />These are some of the most widely implemented application protocols, but many others exist. Each particular TCP/IP implementation will include a lesser or greater set of application protocols.<br />• They use either UDP or TCP as a transport mechanism. Remember that UDP is unreliable and offers no flow-control, so in this case, the application has to provide its own error recovery, flow control, and congestion control functionality. It is often easier to build applications on top of TCP because it is a reliable stream, connection-oriented, congestion-friendly, flow control-enabled protocol. As a result, most application protocols will use TCP, but there are applications built on UDP to achieve better performance through reduced protocol overhead.<br />• Most applications use the client/server model of interaction.<br />3.1 The client/server model<br /><br />TCP is a peer-to-peer, connection-oriented protocol. 
There are no master/slave relationships. The applications, however, typically use a client/server model for communications.<br />A server is an application that offers a service to Internet users; a client is a requester of a service. An application consists of both a server and a client part, which can run on the same or on different systems. Users usually invoke the client part of the application, which builds a request for a particular service and sends it to the server part of the application using TCP/IP as a transport vehicle.<br />The server is a program that receives a request, performs the required service, and sends back the results in a reply. A server can usually deal with multiple requests and multiple requesting clients at the same time. <br /> <br />Most servers wait for requests at a well-known port so that their clients know to which port (and, in turn, to which application) they must direct their requests.<br />The client typically uses an arbitrary port called an ephemeral port for its communication. Clients that wish to communicate with a server that does not use a well-known port must have another mechanism for learning to which port they must address their requests. This mechanism might employ a registration service such as portmap, which does use a well-known port.<br />3.2 Bridges, routers, and gateways<br />There are many ways to provide access to other networks. In an inter-network, this is done using routers. In this section, we distinguish between a router, a bridge, and a gateway for allowing remote network access. <br />Bridge: Interconnects LAN segments at the network interface layer level and forwards frames between them. A bridge performs the function of a MAC relay, and is independent of any higher layer protocol (including the logical link protocol). It provides MAC layer protocol conversion, if required. A bridge is said to be transparent to IP.
That is, when an IP host sends an IP datagram to another host on a network connected by a bridge, it sends the datagram directly to the host and the datagram "crosses" the bridge without the sending IP host being aware of it.<br />Router: Interconnects networks at the inter-network layer level and routes packets between them. The router must understand the addressing structure associated with the networking protocols it supports and make decisions on whether, or how, to forward packets. Routers are able to select the best transmission paths and optimal packet sizes. The basic routing function is implemented in the IP layer of the TCP/IP protocol stack, so any host or workstation running TCP/IP over more than one interface could, in theory and also with most of today's TCP/IP implementations, forward IP datagrams. However, dedicated routers provide much more sophisticated routing than the minimum functions implemented by IP. Because IP provides this basic routing function, the term "IP router" is often used. Other, older terms for router are "IP gateway," "Internet gateway," and "gateway." The term gateway is now normally used for connections at a higher layer than the inter-network layer.<br />A router is said to be visible to IP. That is, when a host sends an IP datagram to another host on a network connected by a router, it sends the datagram to the router so that it can forward it to the target host.<br /><br />Gateway: Interconnects networks at higher layers than bridges and routers. A gateway usually supports address mapping from one network to another, and may also provide transformation of the data between the environments to support end-to-end application connectivity. Gateways typically limit the interconnectivity of two networks to a subset of the application protocols supported on either one. For example, a VM host running TCP/IP may be used as an SMTP/RSCS mail gateway. A gateway is said to be opaque to IP.
That is, a host cannot send an IP datagram through a gateway; it can only send it to a gateway. <br /><br />The higher-level protocol information carried by the datagrams is then passed on by the gateway using whatever networking architecture is used on the other side of the gateway. Closely related to routers and gateways is the concept of a firewall, or firewall gateway, which is used to restrict access from the Internet or some untrusted network to a network or group of networks controlled by an organization for security reasons.<br /><br /><br />4.0 The Open Systems Interconnection (OSI) Reference Model<br />The OSI (Open Systems Interconnection) Reference Model (ISO 7498) defines a seven-layer model of data communication with physical transport at the lower layers and application protocols at the upper layers. This model, shown in Figure 5, is widely accepted as a basis for understanding how a network protocol stack should operate and as a reference tool for comparing network stack implementations.<br /> <br />Each layer provides a set of functions to the layer above and, in turn, relies on the functions provided by the layer below. Although messages can only pass vertically through the stack from layer to layer, from a logical point of view, each layer communicates directly with its peer layer on other nodes.
<br />The seven layers are:<br />Application Network applications such as terminal emulation and file transfer<br />Presentation Formatting of data and encryption <br />Session Establishment and maintenance of sessions<br />Transport Provision of reliable and unreliable end-to-end delivery<br />Network Packet delivery, including routing<br />Data Link Framing of units of information and error checking<br />Physical Transmission of bits on the physical hardware<br /><br />In contrast to TCP/IP, the OSI approach started from a clean slate and defined standards, adhering tightly to its own model, using a formal committee process without requiring implementations. Internet protocols use a less formal engineering approach, where anybody can propose and comment on RFCs, and implementations are required to verify feasibility. The OSI protocols developed slowly, and because running the full protocol stack is resource intensive, they have not been widely deployed, especially in the desktop and small computer market. <br /><br /><br />In the meantime, TCP/IP and the Internet were developing rapidly, with deployment occurring at a very high rate.<br />As with all other communications protocols, TCP/IP is composed of layers:<br />IP - is responsible for moving packets of data from node to node. IP forwards each packet based on a four-byte destination address (the IP number). The Internet authorities assign ranges of numbers to different organizations. The organizations assign groups of their numbers to departments. IP operates on gateway machines that move data from department to organization to region and then around the world.<br />TCP - is responsible for verifying the correct delivery of data from client to server. Data can be lost in the intermediate network.
TCP adds support to detect errors or lost data and to trigger retransmission until the data is correctly and completely received.<br />Sockets - is a name given to the package of subroutines that provide access to TCP/IP on most systems.<br />4.1 The IP Address and Classes<br />4.1.1.1 Hosts and networks<br />IP addressing is based on the concept of hosts and networks. A host is essentially anything on the network that is capable of receiving and transmitting IP packets on the network, such as a workstation or a router. It is not to be confused with a server: servers and client workstations are all IP hosts.<br /><br />The hosts are connected together by one or more networks. The IP address of any host consists of its network address plus its own host address on the network. IP addressing, unlike, say, IPX addressing, uses one address containing both network and host address. How much of the address is used for the network portion and how much for the host portion varies from network to network.<br />4.2 IP addressing<br />An IP address is 32 bits wide, and as discussed, it is composed of two parts: the network number and the host number [1, 2, 3]. By convention, it is expressed as four decimal numbers separated by periods, such as "200.1.2.3", representing the decimal value of each of the four bytes. Valid addresses thus range from 0.0.0.0 to 255.255.255.255, a total of about 4.3 billion addresses. The first few bits of the address indicate the Class that the address belongs to:<br />Class Prefix Network Number Host Number<br />A 0 Bits 1-7 Bits 8-31<br />B 10 Bits 2-15 Bits 16-31<br />C 110 Bits 3-23 Bits 24-31<br />D 1110 N/A <br />E 1111 N/A <br /><br />The bits are labeled in network order, so that the first bit is bit 0 and the last is bit 31, reading from left to right. Class D addresses are multicast, and Class E is reserved.
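Because the class prefixes fall on whole-octet boundaries of the first byte, the class can be read directly from the first octet; a minimal sketch (Python used purely for illustration):

```python
# Sketch: deriving the class of an IPv4 address from its leading bits,
# following the prefix table above (0 -> A, 10 -> B, 110 -> C, 1110 -> D).

def address_class(addr):
    first_octet = int(addr.split(".")[0])
    if first_octet < 128:   # leading bit 0
        return "A"
    if first_octet < 192:   # leading bits 10
        return "B"
    if first_octet < 224:   # leading bits 110
        return "C"
    if first_octet < 240:   # leading bits 1110
        return "D"
    return "E"              # leading bits 1111

print(address_class("10.0.0.1"))   # A
print(address_class("200.1.2.3"))  # C
```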
The range of network numbers and host numbers may then be derived:<br />Class Range of Net Numbers Range of Host Numbers<br />A 0 to 126 0.0.1 to 255.255.254<br />B 128.0 to 191.255 0.1 to 255.254<br />C 192.0.0 to 223.255.255 1 to 254<br />Any address starting with 127 is a loopback address and should never be used for addressing outside the host. A host number of all binary 1's indicates a directed broadcast over the specific network. For example, 200.1.2.255 would indicate a broadcast over the 200.1.2 network. If the host number is 0, it indicates "this host". If the network number is 0, it indicates "this network" [2]. All the reserved bits and reserved addresses severely reduce the available IP addresses from the 4.3 billion theoretical maximum. Most users connected to the Internet will be assigned addresses within Class C, as space is becoming very limited. This is the primary reason for the development of IPv6, which will have 128 bits of address space.<br />5.0 Domain & Workgroup Models<br />Before PCs, the network model revolved around a central computer server and terminals that users could access. These terminals had no autonomous computing power of their own. They provided the user only with an interactive view of the server.<br />With the proliferation of personal computers in the late 1980s, people began to store their files on the local hard drive space available on their PC. This, however, posed a problem for sharing files: something that was trivial when everyone was logging into the same machine (that is, a mainframe) from their terminal. People wanted to store their files locally so that they would be accessible during a server outage (something over which they had no control) while still allowing other users to access the files from their own computer.
This PC-centric distributed model was named peer networking because all the machines were equally likely to be clients and servers and could operate in both modes.<br />5.1 Workgroups<br />The idea of a workgroup goes hand in hand with the concept of peer networking. A workgroup is a unit of people who share responsibilities to achieve a common goal. Each one has to pull his or her own weight. A computer workgroup is no different. As you will see, a computer workgroup can be used in two contexts.<br />The first concept of a workgroup is as an administrative group of machines that do not share user and group account information. Remember step 2 of the SMB protocol overview? That is when the client sends a username and some proof of identity. The question then becomes "Who will validate the request?" Each machine has a separate and local copy of an account database. Therefore, all validation is done locally. Remember that this is called peer networking, or sometimes peer-to-peer networking, because all machines are essentially equal. Each PC has the capability to serve files and printers as well as validate access requests. This equality does not mean that all machines perform the functions equally well, of course.<br />Figure 6 illustrates the idea of the workgroup authentication model. The client, shown at the bottom, attempts to access the disk share on SERVER1. SERVER1 alone is responsible for validating the session setup against its local account database, whatever that might be. When the client attempts to access the printer share on SERVER2, that server is responsible for validating the connection. The outcome is entirely distinct from the outcome of the connection to SERVER1. Each server has a local, distinct account database that is unrelated to the others.<br />The motivation for network browsing is the manner in which resources appear and disappear from the network as hosts start and stop.
Unlike a central computing model, such as a mainframe or terminal solution, where everything is located on one machine, it is much more difficult to survey a large number of hosts that can come on and off the network at the whim of the PC's owner. Browsing allows users to view the current servers and resources available dynamically. In this context, a domain and a workgroup are equivalent.<br /><br /><br />Figure 6<br /> <br />5.2 Domains<br /><br />A domain is similar to a workgroup with one major exception. In a domain, there is a central authentication server that maintains the domain's user and group accounts. Resources in the domain, regardless of which machine they are located on, are accessed by validating against the domain controller. This is still peer networking because all machines maintain the capability to serve files and printers and perform the necessary validation. The difference is that the validation is performed against a remote account database located on the domain controller.<br />Domains grew out of the need to eliminate the mass of passwords that was necessary when every machine had its own local account database. The solution provided users with one account that could allow access to all resources if so desired.<br />Figure 7 shows a sample connection to a server that is a member of some domain. First, the client sends the connection request containing the user information to SERVER1 asking to access some disk share. SERVER1 then sends a validation request to the domain controller (DC). The validation request contains the user information originally sent by the client. If the DC successfully validates the user, it sends a positive response to SERVER1 that then sends a positive connection response back to the client.
This means, assuming that the access control mechanisms such as permission lists allow it, that a client can connect to any server in the domain using a single username and password. In the workgroup model of Figure 6, by contrast, the client needed a separate username and password to connect to each server.<br /><br /><br />Figure 7<br /> <br /><br /><br />6.0 Directory and Naming protocols<br />The TCP/IP protocol suite contains many applications, but these generally take the form of network utilities. Although these are obviously important to a company using a network, they are not, in themselves, the reason why a company invests in a network in the first place. <br />The network exists to provide access for users, who may be both local and remote, to a company's business applications, data, and resources, which may be distributed across many servers throughout a building, a city, or even the world. Those servers may be running on hardware from many different vendors and on several different operating systems. This section looks at methods of accessing resources and applications in a distributed network.<br />6.1 Domain Name System (DNS)<br />The Domain Name System is a standard protocol with STD number 13. Its status is recommended. It is described in RFC 1034 and RFC 1035. This section explains the implementation of the Domain Name System and the implementation of name servers. The early Internet configurations required users to use only numeric IP addresses. Very quickly, this evolved to the use of symbolic host names. For example, instead of typing TELNET 128.12.7.14, one could type TELNET eduvm9, and eduvm9 is then translated in some way to the IP address 128.12.7.14. <br /><br />This introduces the problem of maintaining the mappings between IP addresses and high-level machine names in a coordinated and centralized way.
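In its earliest form this mapping was nothing more than a flat table of name-to-address pairs that every host had to keep complete and current. A sketch of the idea (eduvm9 is the example from the text; the other entry is hypothetical):

```python
# Sketch of a flat namespace: one table mapping every known host name to
# its IP address. "eduvm9" is the example host from the text; "server1"
# is a hypothetical entry. With thousands of hosts, keeping such a table
# current on every machine becomes unmanageable, which is the problem
# the Domain Name System was designed to solve.

hosts = {
    "eduvm9":  "128.12.7.14",
    "server1": "192.0.2.10",
}

def resolve(name):
    """Return the IP address for a host name, or None if unknown."""
    return hosts.get(name)

print(resolve("eduvm9"))  # 128.12.7.14
```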
Initially, host name to address mappings were maintained by the Network Information Center (NIC) in a single file (HOSTS.TXT), which was fetched by all hosts using FTP. This is called a flat namespace. Due to the explosive growth in the number of hosts, this mechanism became too cumbersome (consider the work involved in the addition of just one host to the Internet) and was replaced by a new concept: the Domain Name System. <br />Hosts can continue to use a local flat namespace (the HOSTS.LOCAL file) instead of, or in addition to, the Domain Name System, but outside small networks, the Domain Name System is practically essential. The Domain Name System allows a program running on a host to perform the mapping of a high-level symbolic name to an IP address for any other host without the need for every host to have a complete database of host names.<br /><br /><br />7.0 The Hierarchical Namespace<br />Consider the internal structure of a large organization. As the chief executive cannot do everything, the organization will probably be partitioned into divisions, each of them having autonomy within certain limits. Specifically, the executive in charge of a division has authority to make direct decisions, without permission from his or her chief executive. Domain names are formed in a similar way, and will often reflect the hierarchical delegation of authority used to assign them. <br /><br />For example, consider the name:<br />small.itso.raleigh.ibm.com<br />Here, itso.raleigh.ibm.com is the lowest-level domain name, a sub-domain of raleigh.ibm.com, which again is a sub-domain of ibm.com, a sub-domain of com. We can also represent this naming concept by a hierarchical tree. <br />7.1 Fully Qualified Domain Names (FQDNs)<br />When using the Domain Name System, it is common to work with only a part of the domain hierarchy, for example, the ral.ibm.com domain. The Domain Name System provides a simple method of minimizing the typing necessary in this circumstance.
If a domain name ends in a dot (for example, wtscpok.itsc.pok.ibm.com.), it is assumed to be complete. This is termed a fully qualified domain name (FQDN) or an absolute domain name.<br />However, if it does not end in a dot (for example, wtscpok.itsc), it is incomplete and the DNS resolver (see below) may complete it, for example, by appending a suffix such as .pok.ibm.com to the domain name. The rules for doing this are implementation-dependent and locally configurable.<br /><br />7.2 Generic domains<br />The three-character top-level names are called the generic domains or the organizational domains. Table 4 shows some of the top-level domains of today's Internet domain namespace.<br /><br />Since the Internet began in the United States, the organization of the hierarchical namespace initially had only U.S. organizations at the top of the hierarchy, and it is still largely true that the generic part of the namespace contains U.S. organizations. However, only the .gov and .mil domains are restricted to the U.S. <br />At the time of writing, the U.S. Department of Commerce – National Telecommunications and Information Administration is looking for a different organization for .us domains. As a result, it has been decided to change the status of the Internet Assigned Numbers Authority (IANA), which will no longer be funded and run by the U.S. Government. Instead, a new non-profit organization with an international Board of Directors will be funded by domain registries. Some other organizations have already begun to register new top-level domains.<br />For current information, see the IANA Web site at: http://www.iana.org<br /><br />7.3 Country domains<br />There are also top-level domains named for each of the ISO 3166 international 2-character country codes (from ae for the United Arab Emirates to zw for Zimbabwe). These are called the country domains or the geographical domains. 
Many countries have their own second-level domains beneath the country domain, paralleling the generic top-level domains. <br /><br />For example, in the United Kingdom, the domains equivalent to the generic domains .com and .edu are .co.uk and .ac.uk (ac is an abbreviation for academic). There is a .us top-level domain, which is organized geographically by state (for example, .ny.us refers to the state of New York). See RFC 1480 for a detailed description of the .us domain.<br /><br />7.4 Mapping domain names to IP addresses<br />The mapping of names to addresses is carried out by independent, cooperative systems called name servers. A name server is a server program that holds the master copy or a replica of a name-to-address mapping database, or otherwise points to a server that does, and that answers requests from the client software, called a name resolver. <br /><br />Conceptually, all Internet domain servers are arranged in a tree structure that corresponds to the naming hierarchy in Figure 132 on page 285. Each leaf represents a name server that handles names for a single sub-domain. Links in the conceptual tree do not indicate physical connections; instead, they show which other name servers a given server can contact.<br />7.5 Mapping IP addresses to domain names – pointer queries<br />The Domain Name System provides for a mapping of symbolic names to IP addresses and vice versa. While it is a simple matter in principle to search the database for the IP address matching a symbolic name (because of the hierarchical structure), the reverse process cannot follow the hierarchy. Therefore, there is another namespace for the reverse mapping. It is found in the domain in-addr.arpa (arpa is used because the Internet was originally the ARPAnet). <br /><br />IP addresses are normally written in dotted decimal format, and there is one level of domain for each byte of the address. 
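Since each byte of the dotted-decimal address becomes one level of the reverse-lookup name, the in-addr.arpa name can be constructed mechanically. A minimal shell sketch of that construction (awk is used here only to split the octets; nothing beyond a standard shell is assumed):

```shell
# Build the reverse-lookup (in-addr.arpa) name for an IPv4 address:
# the four octets are emitted in reverse order under in-addr.arpa.
ip="129.34.139.30"
ptr=$(echo "$ip" | awk -F. '{ print $4 "." $3 "." $2 "." $1 ".in-addr.arpa" }')
echo "$ptr"    # prints 30.139.34.129.in-addr.arpa
```

The resulting name is what a resolver looks up when it performs a pointer query for that address.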
However, because domain names have the least-significant part of the name first, while dotted decimal format has the most significant bytes first, the dotted decimal address is shown in reverse order. For example, the domain in the Domain Name System corresponding to the IP address 129.34.139.30 is 30.139.34.129.in-addr.arpa. Given an IP address, the Domain Name System can be used to find the matching host name. A domain name query to find the host names associated with an IP address is called a pointer query.<br /><br /><br />8.0 Virtual Private Network (VPN)<br />The world has changed a lot in the last couple of decades. Instead of simply dealing with local or regional concerns, many businesses now have to think about global markets and logistics. Many companies have facilities spread out across the country or around the world, and there is one thing that all of them need: a way to maintain fast, secure and reliable communications wherever their offices are. <br />Until fairly recently, this has meant the use of leased lines to maintain a wide area network (WAN). Leased lines, ranging from ISDN (integrated services digital network, 128 Kbps) to OC3 (Optical Carrier-3, 155 Mbps) fiber, provided a company with a way to expand its private network beyond its immediate geographic area. A WAN had obvious advantages over a public network like the Internet when it came to reliability, performance and security. But maintaining a WAN, particularly when using leased lines, can become quite expensive, and often rises in cost as the distance between the offices increases. <br />As the popularity of the Internet grew, businesses turned to it as a means of extending their own networks. First came intranets, which are password-protected sites designed for use only by company employees. 
Now, many companies are creating their own VPN (virtual private network) to accommodate the needs of remote employees and distant offices. <br /><br />A typical VPN might have a main LAN at the corporate headquarters of a company, other LANs at remote offices or facilities, and individual users connecting from out in the field.<br />Basically, a VPN is a private network that uses a public network (usually the Internet) to connect remote sites or users together. Instead of using a dedicated, real-world connection such as a leased line, a VPN uses "virtual" connections routed through the Internet from the company's private network to the remote site or employee. In this article, you will gain a fundamental understanding of VPNs, and learn about basic VPN components, technologies, tunneling and security. <br /><br />8.1 What Makes a VPN?<br /><br />A well-designed VPN can greatly benefit a company. For example, it can: <br />• Extend geographic connectivity <br />• Improve security <br />• Reduce operational costs versus a traditional WAN <br />• Reduce transit time and transportation costs for remote users <br />• Improve productivity <br />• Simplify network topology <br />• Provide global networking opportunities <br />• Provide telecommuter support <br />• Provide broadband networking compatibility <br />• Provide faster ROI (return on investment) than a traditional WAN <br />A well-designed VPN should also incorporate: <br />• Security <br />• Reliability <br />• Scalability <br />• Network management <br />• Policy management <br />There are three types of VPN: remote-access, intranet-based site-to-site, and extranet-based site-to-site. In the next couple of sections, we'll describe them in detail. <br />Remote-Access VPN<br />Remote-access, also called a virtual private dial-up network (VPDN), is a user-to-LAN connection used by a company that has employees who need to connect to the private network from various remote locations. 
Typically, a corporation that wishes to set up a large remote-access VPN will outsource to an enterprise service provider (ESP).<br /><br />The ESP sets up a network access server (NAS) and provides the remote users with desktop client software for their computers. The telecommuters can then dial a toll-free number to reach the NAS and use their VPN client software to access the corporate network. <br />A good example of a company that needs a remote-access VPN would be a large firm with hundreds of salespeople in the field. Remote-access VPNs permit secure, encrypted connections between a company's private network and remote users through a third-party service provider. <br /><br />Fig: Examples of the three types of VPN<br />Site-to-Site VPN<br />Through the use of dedicated equipment and large-scale encryption, a company can connect multiple fixed sites over a public network such as the Internet. Site-to-site VPNs can be one of two types: <br />Intranet-based - If a company has one or more remote locations that they wish to join in a single private network, they can create an intranet VPN to connect LAN to LAN. <br />Extranet-based - When a company has a close relationship with another company (for example, a partner, supplier or customer), they can build an extranet VPN that connects LAN to LAN, and that allows all of the various companies to work in a shared environment. <br /><br /><br />9.0 VMware Workstation<br />9.1 What Is VMware Workstation? <br /><br />VMware Workstation is powerful virtual machine software for developers and system administrators who want to revolutionize software development, testing and deployment in their enterprise. 
Shipping for more than five years and winner of over a dozen major product awards, VMware Workstation enables software developers to develop and test the most complex networked server-class applications running on Microsoft Windows, Linux or NetWare, all on a single desktop. <br /><br />Essential features such as virtual networking, live snapshots, drag and drop, shared folders, and PXE support make VMware Workstation the most powerful and indispensable tool for enterprise IT developers and system administrators. <br />9.2 How Is VMware Workstation Used? <br /><br />With over five years of proven success and millions of users, VMware Workstation improves efficiency, reduces costs and increases flexibility and responsiveness. Installing VMware Workstation on the desktop is the first step to transforming your IT infrastructure into virtual infrastructure. <br />VMware Workstation is used in the enterprise to:<br /><br />• Streamline software development and testing operations. <br />• Accelerate application deployments. <br />• Ensure application compatibility and perform operating system migrations. <br />9.3 How Does VMware Workstation Work? <br /><br />VMware Workstation works by enabling multiple operating systems and their applications to run concurrently on a single physical machine. These operating systems and applications are isolated in secure virtual machines that co-exist on a single piece of hardware. The VMware virtualization layer maps the physical hardware resources to the virtual machine's resources, so each virtual machine has its own CPU, memory, disks, I/O devices, etc. Virtual machines are the full equivalent of a standard x86 machine. <br />VMware Workstation enables enterprise software developers to develop and test the most complex networked server-class applications running on Windows, Linux or NetWare, all on a single desktop. 
<br /><br />With VMware Workstation you can:<br />• Build complex networks, and develop, test, and deploy new applications, all on a single computer.<br />• Leverage the portability of virtual machines to easily share development environments and pre-packaged operating system/application testing configurations without risk. <br />• Add or change operating systems without repartitioning disks or rebooting. <br />• Run new operating systems and legacy applications on one computer. <br /><br />9.4 Why Does Business Need VMware Workstation? <br /><br />Since its launch in 1999, VMware Workstation has revolutionized the way software and IT infrastructure is developed and has become the de facto standard for IT professionals and developers worldwide. If your business is looking to simplify and accelerate development, testing and deployment of software and IT infrastructure, VMware Workstation is essential. <br />When you deploy VMware Workstation in your environment you will: <br /><br />• Shorten development cycles. <br />• Reduce problem resolution time. <br />• Increase productivity. <br />• Accelerate time-to-market. <br />• Improve project quality. 
<br /><br />Fig: VMware Workstation Architecture<br /><br />Why Use VMware Workstation?<br /><br />Streamline Software Development and Testing<br />Usage scenarios:<br />• Create multiple development and testing environments on a single system<br />• Build mission-critical Windows- and/or Linux-based applications<br />• Archive test environments on file servers and restore them quickly, as needed<br />• Test new application updates, OS patches and service packs on a single PC<br />Benefits:<br />• Accelerate development cycles and reduce time to market<br />• Reduce hardware costs by 50-60%<br />• Reduce costly configuration and set-up time by 25-55%, freeing time for important development and testing<br />• Improve project quality with more rigorous testing<br />• Eliminate costly deployment and maintenance problems<br /><br />Accelerate Application Deployment<br />Usage scenarios:<br />• Test, configure and provision enterprise-class servers as VMware Workstation VMs and then deploy them on a physical server or on VMware GSX Server or VMware ESX Server<br />• Create a whole network of applications composed of multiple computers and multiple network switches in a set of virtual machines, and test them without affecting the production network<br />• Test physical-to-virtual migrations for server consolidation and legacy application migrations<br />Benefits:<br />• Reduce hardware costs by 50-60%<br />• Improve quality of deployments<br />• Improve productivity<br />• Reduce risk to corporate networks by creating complex, secure and isolated virtual networks that mirror enterprise networks<br /><br />Ensure Application Compatibility and Perform Operating System Migration<br />Usage scenarios:<br />• Support legacy applications while migrating safely to a new operating system<br />• Test new operating systems in secure, clean virtual machines prior to deployment<br />• Eliminate the need to port legacy applications<br />Benefits:<br />• Complete complex OS migration projects on time and on budget<br />• Increase operations efficiency by up to 50%<br />• Reduce desktop capital costs by 50-60%<br />• Minimize end-user pain during transition<br /><br /><br />10.0 GENERAL HARDWARE ORIENTED SYSTEM TRANSFER <br />10.1 Comprehensive PC management for OS deployment, software distribution, user-state migration, back-up and disaster recovery<br /><br />Managing today’s increasingly heterogeneous enterprise environments of connected and mobile PCs poses major challenges for IT managers. Primary among them is the need to control the costs of setting up new PCs, migrating user desktop settings, and deploying OS and application upgrades and updates. By enabling the remote management of routine tasks such as PC deployment, cloning, changes in configuration settings, user migration, and backup and recovery of disk images, Symantec Ghost streamlines the configuration and management of networked PCs, thereby dramatically reducing IT costs. <br />10.2 Centralized Management and Remote Capabilities <br />With Symantec Ghost 8.0, administrators can deploy or restore an OS image or application onto a PC in minutes and then migrate individual user settings and profiles to customize the PC. Robust centralized management and remote capabilities boost IT productivity and help lower the total cost of ownership for networked PCs and workstations. From the Symantec Ghost console, IT managers can remotely clone any Windows NT or Windows 2000 workstation. And they can quickly deploy whole application packages or specific PC changes such as registry changes or desktop settings. 
Plus, administrators can migrate user “personalities” (including PC settings and data), remotely clone multiple workstations, and then quickly configure critical workstation data such as TCP/IP settings and machine, workgroup, and domain names, all from the Ghost central console. <br />10.3 Benefit From Several New PC Change-Management Capabilities<br />Several new features make the latest version of Symantec Ghost a more powerful, versatile, and compact PC change-management solution. User Migration allows administrators to remotely transfer user files, directories, and desktop and network settings between PCs. Incremental Backup enables the remote backup of only the most recent user changes. And AutoInstall Integration consolidates AutoInstall functionality within the central console, making the customization of software packages and the deployment of updates easy, while reducing the overall Symantec Ghost footprint. <br />10.4 Support Today’s Latest Technologies<br />Symantec Ghost supports Intel® Wired for Management and Pre-Boot eXecution (PXE) services, the standard industry guidelines for building advanced management capabilities into PCs. The all-new version also supports Microsoft’s System Preparation utility and is the only PC management tool that is Windows™ 2000 certified, making it the tool of choice for migrating to the latest operating system from Microsoft®. <br /><br /><br />10.5 Clone multiple target PCs using multicasting<br />The replication of a model workstation onto many computers can be a time-consuming task. 
One-to-one connections with a small number of computers are fast and efficient, but as the number of machines increases, the time for overall completion of the entire replication task increases in proportion to the number of computers being cloned.<br />When Ghost uses a one-to-one approach for transferring information, each of the computer drives being replicated receives its own copy of the information, and each of these copies needs to be passed through the same network channel. As the number of replications on the same network increases, the time for overall task completion increases, because multiple copies of the information are sent through the common channel.<br />Ghost Multicasting uses TCP/IP multicasting in conjunction with a reliable session protocol to provide one-to-many communication. Ghost Multicasting supports both Ethernet and Token Ring networks and clears away the bottleneck of having multiple copies of data passed through the network. Ghost Multicasting includes support for the multicasting of disk images and partition images, as well as automatic multicast server session starting options and image file creation. A multicasting session consists of one server, a single image file, and a group of similar Ghost clients requiring the identical disk or partition image. The session name is used by Ghost clients to indicate the session they are to join and listen to.<br />The Ghost Multicasting client is built into the Ghost application software. 
Ghost operates in conjunction with the Ghost Multicast Server application to provide a fast and easy way of replicating workstations.<br /><br /><br />10.6 Typical usage examples<br />Ghost’s ability to clone hard drives and partitions provides a flexible and powerful tool that can be used for anything from upgrading the hard drive in your PC at home, right through to managing organization-wide system configuration in large corporations.<br />10.7 Upgrade networked workstations<br />Your company has decided to upgrade from Windows NT to Windows 2000. You have 25 workstations to configure, and only a day to do it. With Ghost, you can create a model system with all of the necessary software installed (office software, web browser, etc.), and then save an image of the system to a network server. You can then use Ghost to load the image onto the other machines over the network. If you are using Ghost Multicast Server, you can load multiple machines at once, dramatically reducing installation time and network traffic:<br />1. An image of the model system is saved onto the Multicast Server machine.<br />2. Ghost Multicast Server receives the model image and creates an image file.<br />3. Ghost Multicast Server transmits the image file simultaneously to all listening Ghost machines.<br />4. The cloned systems are updated simultaneously using the image file sent by Ghost Multicast Server. <br /><br />11.0 Unit Summary<br />In this session we have learnt:<br />1. TCP/IP Model<br />2. TCP/IP applications<br />3. OSI Model<br />4. Domains and Work Groups<br />5. VPN and VMware<br />6. General Hardware Concepts<br />11.1 Exercise<br />Answer the following in short:<br />1. Define the purpose of <br /> Gateways<br /> Routers<br /> Bridges<br /> DNS<br /> VMware Workstation<br /><br />2. Explain the IP addressing process.<br />3. Explain the components of a VPN.<br />4. 
Explain the “Domains”.<br /><br /><br />UNIX Tutorial for beginners 2<br /><span style="font-weight:bold;">Unix<br />Bourne Shell Programming</span><br /><br />Table of Contents <br />1 Very simple Scripts<br />1.1 Traditional hello world script<br />1.2 A very simple backup script<br />2 Redirections<br />2.1 Sample: stdout 2 file<br />2.2 Sample: stderr 2 file<br />2.3 Sample: stdout 2 stderr<br />2.4 Sample: stderr 2 stdout<br />2.5 Sample: stderr and stdout 2 file<br />3 Pipes<br />3.1 What they are and why you'll want to use them<br />3.2 Sample: simple pipe with sed<br />3.3 Sample: an alternative to ls -l *.txt<br />4 Variables<br />4.1 Sample: Hello World! using variables<br />4.2 Sample: A very simple backup script (little bit better)<br />4.3 Local variables<br />5 Conditional Statements<br />5.1 Dry Theory<br />5.2 Sample: Basic conditional example if .. then<br />5.3 Sample: Basic conditional example if .. then ... else<br />5.4 Sample: Conditionals with variables<br />6 Loops for, while and until<br />6.1 For sample<br />6.2 C-like for<br />6.3 While sample<br />6.4 Until sample<br />7 Functions<br />7.1 Functions sample<br />7.2 Functions with parameters sample<br />8 User interfaces<br />8.1 Using select to make simple menus<br />8.2 Using the command line<br />9 Miscellaneous<br />9.1 Reading user input with read<br />9.2 Arithmetic evaluation<br />9.3 Finding bash<br />9.4 Getting the return value of a program<br />10 Tables<br />10.1 String comparison operators<br />10.2 String comparison examples<br />10.3 Arithmetic operators<br />10.4 Arithmetic relational operators<br />11 More Scripts<br />11.1 Applying a command to all files in a directory<br />11.2 Sample: A very simple backup script (little bit better)<br />11.3 File re-namer<br />11.4 File renamer (simple)<br />12 When something goes wrong (debugging)<br />12.1 Ways of calling BASH<br /><br /><br />1 Very simple Scripts <br /><br />This tutorial will try to give you some hints about shell script programming, strongly based on examples. <br />In this section you'll find some little scripts which will hopefully help you to understand some techniques. <br />1.1 Traditional hello world script <br />#!/bin/bash <br />echo Hello World <br />This script has only two lines. The first line tells the system which program to use to run the file. <br />The second line is the only action performed by this script; it prints 'Hello World' on the terminal. <br />If you get something like ./hello.sh: Command not found., the first line '#!/bin/bash' is probably wrong: issue whereis bash, or see 'Finding bash' below, to find out how you should write this line. 
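To find out what to put on that first line, ask the system where bash lives. A quick check (whereis is available on most Linux systems; command -v is the portable fallback):

```shell
# Locate the bash binary so the '#!' interpreter line can point at the right path
whereis bash        # e.g. "bash: /bin/bash /usr/share/man/man1/bash.1.gz"
command -v bash     # portable; prints the first bash found on $PATH
```

Put the path this reports after #! as the very first line of your script.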
<br /><br />1.2 A very simple backup script <br />#!/bin/bash <br /> tar -cZf /var/my-backup.tgz /home/me/<br /><br />In this script, instead of printing a message on the terminal, we create a tar-ball of a user's home directory. This is NOT intended to be used; a more useful backup script is presented later in this document. <br /><br />2 Redirections <br />There are 3 file descriptors: stdin, stdout and stderr (std=standard). <br />Basically you can: <br />1. redirect stdout to a file <br />2. redirect stderr to a file <br />3. redirect stdout to stderr <br />4. redirect stderr to stdout <br />5. redirect stderr and stdout to a file <br />6. redirect stderr and stdout to stdout <br />7. redirect stderr and stdout to stderr <br /><br />In redirections, 1 represents stdout and 2 represents stderr. <br />A little note for seeing these things: with the less command you can view both stdout (which will remain in the buffer) and stderr, which will be printed on the screen but erased as you try to 'browse' the buffer. <br />2.1 Sample: stdout 2 file <br />This will cause the output of a program to be written to a file. <br /> ls -l > ls-l.txt <br />Here, a file called 'ls-l.txt' will be created, and it will contain what you would see on the screen if you typed the command 'ls -l' and executed it. <br /><br />2.2 Sample: stderr 2 file <br />This will cause the stderr output of a program to be written to a file. <br /> grep da * 2> grep-errors.txt <br />Here, a file called 'grep-errors.txt' will be created, and it will contain the stderr portion of the output of the 'grep da *' command. <br /><br />2.3 Sample: stdout 2 stderr <br />This will cause the stdout output of a program to be written to the same file descriptor as stderr. <br />grep da * 1>&2 <br />Here, the stdout portion of the command is sent to stderr; you may notice that in different ways. 
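One way to notice it: output moved to stderr with 1>&2 is no longer captured by command substitution, which reads only stdout. A minimal self-contained demo (plain echo instead of grep, so no input files are needed):

```shell
# echo's stdout is redirected to stderr with 1>&2; the group's stderr is then
# discarded, so $( ) -- which captures only stdout -- ends up with nothing.
captured=$( { echo "hello" 1>&2; } 2>/dev/null )
echo "captured=[$captured]"    # prints captured=[]
```

Remove the 1>&2 and the same command substitution captures "hello" as you would expect.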
<br /><br />2.4 Sample: stderr 2 stdout <br />This will cause the stderr output of a program to be written to the same file descriptor as stdout. <br />grep da * 2>&1 <br /><br />Here, the stderr portion of the command is sent to stdout; if you pipe to less, you'll see that lines that normally 'disappear' (as they are written to stderr) are now being kept (because they're on stdout). <br /><br />2.5 Sample: stderr and stdout 2 file <br />This will place all output of a program in a file. This is sometimes suitable for cron entries, if you want a command to run in absolute silence. <br /> rm -f $(find / -name core) &> /dev/null <br />This (thinking of the cron entry) will delete every file called 'core' in any directory. Notice that you should be pretty sure of what a command is doing if you are going to wipe its output. <br /><br />3 Pipes <br /><br />This section explains in a very simple and practical way how to use pipes, and why you may want to. <br />3.1 What they are and why you'll want to use them <br />Pipes let you use (very simply, I insist) the output of a program as the input of another one. <br /><br />3.2 Sample: simple pipe with sed <br />This is a very simple way to use pipes. <br /> ls -l | sed -e "s/[aeio]/u/g" <br />Here, the following happens: first the command ls -l is executed, and its output, instead of being printed, is sent (piped) to the sed program, which in turn prints what it has to. <br /><br />3.3 Sample: an alternative to ls -l *.txt <br />This is probably a more difficult way to do ls -l *.txt, but it is here to illustrate pipes, not to solve such a listing dilemma. <br /> ls -l | grep "\.txt$" <br />Here, the output of the program ls -l is sent to the grep program, which, in turn, will print the lines which match the regex "\.txt$". <br /><br />4 Variables <br /><br />You can use variables as in any programming language. There are no data types. 
A variable in bash can contain a number, a character, or a string of characters. <br />You have no need to declare a variable; just assigning a value to its reference will create it. <br />4.1 Sample: Hello World! using variables <br />#!/bin/bash <br /> STR="Hello World!"<br /> echo $STR <br />Line 2 creates a variable called STR and assigns the string "Hello World!" to it. Then the VALUE of this variable is retrieved by putting '$' at the beginning. Please notice (try it!) that if you don't use the '$' sign, the output of the program will be different, and probably not what you want it to be. <br /><br />4.2 Sample: A very simple backup script (little bit better) <br />#!/bin/bash <br /> OF=/var/my-backup-$(date +%Y%m%d).tgz<br /> tar -cZf $OF /home/me/<br /><br />This script introduces another thing. First of all, you should be familiar with the variable creation and assignment on line 2. Notice the expression '$(date +%Y%m%d)'. If you run the script you'll notice that it runs the command inside the parentheses, capturing its output. <br />Notice that in this script, the output filename will be different every day, due to the format switch to the date command (+%Y%m%d). You can change this by specifying a different format. <br />Some more examples: <br />echo ls <br />echo $(ls) <br /><br />4.3 Local variables <br />Local variables can be created by using the keyword local. <br />#!/bin/bash <br /><br /> HELLO=Hello <br /> function hello {<br /> local HELLO=World<br /> echo $HELLO<br /> }<br /> echo $HELLO<br /> hello<br /> echo $HELLO<br /><br />This example should be enough to show how to use a local variable. <br /><br />5 Conditional Statements <br />Conditionals let you decide whether to perform an action or not; this decision is taken by evaluating an expression. <br />5.1 Dry Theory <br />Conditionals have many forms. The most basic form is: if expression then statement, where 'statement' is only executed if 'expression' evaluates to true. 
'2<1' is an expression that evaluates to false, while '2>1' evaluates to true. <br />Conditionals have other forms, such as: if expression then statement1 else statement2. Here 'statement1' is executed if 'expression' is true, otherwise 'statement2' is executed. <br />Yet another form of conditionals is: if expression1 then statement1 else if expression2 then statement2 else statement3. This form only adds "ELSE IF 'expression2' THEN 'statement2'", which makes statement2 execute if expression2 evaluates to true. The rest is as you may imagine (see the previous forms). <br />A word about syntax: <br />The base for the 'if' constructions in bash is this: <br />if [ expression ]; <br />then <br />code if 'expression' is true. <br />fi <br /><br />5.2 Sample: Basic conditional example if .. then <br /> #!/bin/bash<br /> if [ "foo" = "foo" ]; then<br /><br /><br /> echo expression evaluated as true<br /> fi <br />The code to be executed if the expression within brackets is true can be found after the 'then' word and before 'fi', which indicates the end of the conditionally executed code. <br /><br />5.3 Sample: Basic conditional example if .. then ... else <br />#!/bin/bash<br /> if [ "foo" = "foo" ]; then<br /> echo expression evaluated as true<br /> else<br /> echo expression evaluated as false<br /> fi <br /><br />5.4 Sample: Conditionals with variables <br />#!/bin/bash<br /> T1="foo"<br /> T2="bar"<br /> if [ "$T1" = "$T2" ]; then<br /> echo expression evaluated as true<br /> else<br /> echo expression evaluated as false<br /> fi <br /> <br /><br />6 Loops for, while and until<br /><br />In this section you'll find for, while and until loops. <br />The "for" loop is a little bit different from other programming languages. Basically, it lets you iterate over a series of 'words' within a string. <br />The while loop executes a piece of code as long as the control expression is true, and only stops when it is false (or an explicit break is found within the executed code). 
<br />The "until" loop is almost equal to the while loop, except that the code is executed while the control expression evaluates to false. <br />If you suspect that while and until are very similar you are right. <br />6.1 For sample <br />#!/bin/bash<br /> for i in $( ls ); do<br /> echo item: $i<br /> done<br /><br /><br />On the second line, we declare i to be the variable that will take the different values contained in $( ls ). <br />The third line could be longer if needed, or there could be more lines before the done (4). <br />'done' (4) indicates that the code that used the value of $i has finished and $i can take a new value. <br />This script has very little sense, but a more useful way to use the for loop would be to use it to match only certain files on the previous example <br /><br />6.2 C-like for <br />Fresh suggested adding this form of looping. It's a for loop more similar to C/perl... for. <br />#!/bin/bash<br /> for i in `seq 1 10`;<br /> do<br /><br /> echo $i<br /> done <br /> <br /><br />6.3 While sample <br />#!/bin/bash <br />COUNTER=0<br /> while [ $COUNTER -lt 10 ]; do<br /><br /> echo The counter is $COUNTER<br /> let COUNTER=COUNTER+1 <br />done<br /><br />This script 'emulates' the well known (C, Pascal, perl, etc) 'for' structure <br /><br />6.4 Until sample <br />#!/bin/bash <br />COUNTER=20<br /> until [ $COUNTER -lt 10 ]; do<br /><br /> echo COUNTER $COUNTER<br /> let COUNTER-=1<br /> done<br /> <br /> <br /><br />8 User interfaces<br /><br />8.1 Using select to make simple menus <br />#!/bin/bash<br /> OPTIONS="Hello Quit"<br /> select opt in $OPTIONS; do<br /> if [ "$opt" = "Quit" ]; then<br /> echo done<br /> exit elif [ "$opt" = "Hello" ]; then<br /> echo Hello World<br /> else<br /><br /> clear<br /> echo bad option<br /> fi<br /> done<br /><br />If you run this script you'll see that it is a programmer's dream for text based menus. 
You'll probably notice that it's very similar to the 'for' construction, only rather than looping for each 'word' in $OPTIONS, it prompts the user. <br /><br />8.2 Using the command line <br />#!/bin/bash <br /> if [ -z "$1" ]; then <br />echo usage: $0 directory<br /> exit<br /> fi<br /> SRCD=$1<br /> TGTD="/var/backups/"<br /> OF=home-$(date +%Y%m%d).tgz<br /> tar -cZf $TGTD$OF $SRCD<br /><br />What this script does should be clear to you. The expression in the first conditional tests if the program has received an argument ($1) and quits if it didn't, showing the user a little usage message. The rest of the script should be clear at this point. <br /> <br /> <br /><br />9 Miscellaneous <br /><br />9.1 Reading user input with read <br />On many occasions you may want to prompt the user for some input, and there are several ways to achieve this. This is one of those ways: <br /> #!/bin/bash<br /> echo Please, enter your name<br /> read NAME<br /> echo "Hi $NAME!" <br />As a variant, you can get multiple values with read; the following example may clarify this. <br />#!/bin/bash<br /> echo Please, enter your first name and last name<br /> read FN LN<br /> echo "Hi! $LN, $FN !" <br /><br />9.2 Arithmetic evaluation <br />On the command line (or a shell) try this: <br />echo 1 + 1 <br />If you expected to see '2' you'll be disappointed. What if you want BASH to evaluate <br />some numbers you have? The solution is this: <br />echo $((1+1)) <br />This will produce a more 'logical' output. This is how you evaluate an arithmetic expression. <br />You can also achieve this like this: <br />echo $[1+1] <br />If you need fractions or more advanced maths, or you just want them, you can use bc to evaluate <br />arithmetic expressions. <br />If I ran "echo $[3/4]" at the command prompt, it would return 0 because bash uses only integers when answering. If you ran "echo 3/4|bc -l", it would properly return 0.75. <br />9.3 Finding bash <br />From a message from mike (see Thanks to): you always use #!/bin/bash .. 
you might want to give an example of how to find where bash is located. 'locate bash' is preferred, but not all machines have locate. 'find ./ -name bash' from the root dir will work, usually. Suggested locations to check: <br />ls -l /bin/bash <br />ls -l /sbin/bash <br />ls -l /usr/local/bin/bash <br />ls -l /usr/bin/bash <br />ls -l /usr/sbin/bash <br />ls -l /usr/local/sbin/bash <br />(can't think of any other dirs offhand... I've found it in most of these places before on different systems). You may also try 'which bash'. <br /><br />9.4 Getting the return value of a program <br />In bash, the return value of a program is stored in a special variable called $?. <br />This illustrates how to capture the return value of a program; I assume that the directory dada does not exist. (This was also suggested by mike.) <br />#!/bin/bash<br /> cd /dada &> /dev/null<br /> echo rv: $?<br /> cd $(pwd) &> /dev/null<br /> echo rv: $?<br /><br /><br />10 Tables <br />10.1 String comparison operators <br /> (1) s1 = s2 : s1 matches s2 <br /> (2) s1 != s2 : s1 does not match s2 <br /> (3) s1 < s2 : s1 sorts before s2 (lexicographically) <br /> (4) s1 > s2 : s1 sorts after s2 (lexicographically) <br /> (5) -n s1 : s1 is not null (contains one or more characters) <br /> (6) -z s1 : s1 is null <br /><br />10.2 String comparison examples <br />Comparing two strings. <br />#!/bin/bash<br /> S1='string'<br /> S2='String'<br /> if [ $S1=$S2 ];<br /> then<br /> echo "S1('$S1') is not equal to S2('$S2')"<br /> fi<br /> if [ $S1=$S1 ];<br /> then<br /> echo "S1('$S1') is equal to S1('$S1')"<br /> fi <br />(Both messages are printed here: without spaces around '=', each test is a single non-empty string, which [ ] always treats as true.) <br />I quote here a note from a mail sent by Andreas Beck, referring to the use of if [ $1 = $2 ]. <br /><br />This is not quite a good idea, as if either $S1 or $S2 is empty, you will get a parse error. Writing x"$1" = x"$2" or "$1" = "$2", with spaces around '=', is better. 
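Andreas Beck's advice above can be sketched as a small script; this is a minimal illustration (the variable values are invented for the example), assuming bash:

```shell
#!/bin/bash
# Safe string comparison: quote both sides and put spaces
# around '=', so an empty variable cannot cause a parse error
# and the comparison is a real test, not a single word.
S1='string'
S2=''
if [ "$S1" = "$S2" ]; then
    RESULT="equal"
else
    RESULT="not equal"
fi
echo "$RESULT"
```

With S2 empty, the unquoted form [ $S1 = $S2 ] would fail with a parse error, and the unspaced form [ $S1=$S2 ] would always evaluate as true; the quoted, spaced form above correctly prints "not equal".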
<br />10.3 Arithmetic operators <br />+ <br />-<br />* <br />/ <br />% (remainder) <br />10.4 Arithmetic relational operators <br />-lt (<) <br />-gt (>) <br />-le (<=) <br />-ge (>=) <br />-eq (==) <br />-ne (!=) <br />C programmers can simply map each operator to its parenthesized C equivalent. <br /> <br /> <br /><br />11 More Scripts <br />11.1 Sample: A very simple backup script (little bit better) <br />#!/bin/bash <br />SRCD="/home/"<br /> TGTD="/var/backups/"<br /> OF=home-$(date +%Y%m%d).tgz<br /> tar -cZf $TGTD$OF $SRCD<br /> <br /> <br /><br />12 When something goes wrong (debugging) <br />12.1 Ways of calling BASH <br />A nice thing to do is to add on the first line <br /> #!/bin/bash -x <br />This will print each command before it is executed, which produces some interesting debugging output. <br /> <br /><br />13 Reference <br />1. http://www.tldp.org/LDP/abs/html/ <br />2. http://www.freeos.com/guides/lsst/ <br />3. http://quong.best.vwh.net/shellin20/ <br />4. http://www.eng.hawaii.edu/Tutor/vi.html <br />5. http://tutorials.beginners.co.uk/read/category/11/id/269 <br /><br /><span style="font-weight:bold;">UNIX Tutorial for beginners</span> (Table of Contents), posted by Rajesh Babu Rajamanickam, 2008-01-14 <br /> <br />1. 
What Is UNIX? <br />1.1 The kernel <br />1.2 The standard utility programs <br />1.3 The system configuration files <br />2 Accessing a UNIX System <br />2.1 Console <br />2.2 Smart terminals <br />2.3 Remote access <br />3 Logging In and Logging Out <br />3.1 Logging in <br />3.2 Your username <br />3.3 Your password <br />3.4 Logging Out <br />4 The UNIX Shell <br />4.1 Concept <br />4.2 Entering shell commands <br />4.3 Aborting a shell command <br />4.4 Special characters in UNIX <br />4.5 Getting help on UNIX <br />5 Files and Directories <br />5.1 The UNIX file system structure <br />5.2 File and directory permissions <br />5.2.1 A short note on groups <br />5.2.2 The meaning of file and directory permissions <br />5.2.3 Viewing permissions <br />5.2.4 Setting permissions <br />5.2.5 Changing Directories <br />5.2.6 Listing the contents of a directory <br />5.2.7 Viewing the contents of a file <br />5.2.8 Copying files and directories <br />5.2.9 Moving and renaming files <br />5.2.10 Removing files <br />5.2.11 Creating a directory <br />5.2.12 Removing a directory <br />6 Redirecting Input and Output <br />6.1 Redirecting input <br />6.2 Redirecting output <br />6.3 Redirecting error <br />7 Pipelines and Filters <br />7.1 Concept <br />7.2 Grep <br />8 Examples of Basic Commands <br />Summary of Basic Commands <br />9 The vi Editor <br />9.1 Basics of the vi editor <br />9.2 Basic vi keys <br />10 Networking basics <br />10.1 ifconfig <br />10.2 
ping <br />10.3 Netstat <br />11 Reference <br /> <br />1. What Is UNIX? <br /><br />UNIX is an operating system. The job of an operating system is to orchestrate the various parts of the computer -- the processor, the on-board memory, the disk drives, keyboards, video monitors, etc. -- to perform useful tasks. The operating system is the master controller of the computer, the glue that holds together all the components of the system, including the administrators, programmers, and users. When you want the computer to do something for you, like start a program, copy a file, or display the contents of a directory, it is the operating system that must perform those tasks for you. <br />More than anything else, the operating system gives the computer its recognizable characteristics. It would be difficult to distinguish between two completely different computers if they were running the same operating system. Conversely, two identical computers, running different operating systems, would appear completely different to the user. <br />UNIX was created in the late 1960s, in an effort to provide a multi-user, multitasking system for use by programmers. The philosophy behind the design of UNIX was to provide simple, yet powerful utilities that could be pieced together in a flexible manner to perform a wide variety of tasks. The UNIX operating system comprises three parts: the kernel, the standard utility programs, and the system configuration files. <br />1.1 The kernel <br /><br />The kernel is the core of the UNIX operating system. Basically, the kernel is a large program that is loaded into memory when the machine is turned on, and it controls the allocation of hardware resources from that point forward. 
The kernel knows what hardware resources are available (like the processor(s), the on-board memory, the disk drives, network interfaces, etc.), and it has the necessary programs to talk to all the devices connected to it. <br />1.2 The standard utility programs <br /><br />These programs include simple utilities like cp, which copies files, and complex utilities, like the shell, that allow you to issue commands to the operating system. <br />1.3 The system configuration files <br /><br />The system configuration files are read by the kernel and some of the standard utilities. The UNIX kernel and the utilities are flexible programs, and certain aspects of their behavior can be controlled by changing the standard configuration files. One example of a system configuration file is the filesystem table "fstab", which tells the kernel where to find all the files on the disk drives. Another example is the system log configuration file "syslog.conf", which tells the system logger how to record the various kinds of events and errors it may encounter. <br />2 Accessing a UNIX System <br /><br />A UNIX system can be accessed through the following different methods. <br />2.1 Console <br /><br />Every UNIX system has a main console that is connected directly to the machine. The console is a special type of terminal that is recognized when the system is started. Some UNIX system operations must be performed at the console. Typically, the console is only accessible by the system operators and administrators. <br />2.2 Smart terminals <br /><br />Smart terminals, like the X terminal, can interact with the UNIX system at a higher level. Smart terminals have enough on-board memory and processing power to support graphical interfaces. The interaction between a smart terminal and a UNIX system can go beyond simple characters to include icons, windows, menus, and mouse actions. 
<br />2.3 Remote access <br /><br />A UNIX system can also be accessed using telnet or ssh from any UNIX or non-UNIX platform. <br /> <br />$ telnet <UNIX-IP address or Host Name> <br />$ ssh <UNIX-IP address or Host Name> <br />3 Logging In and Logging Out <br /><br />To ensure security and organization on a system with many users, UNIX machines employ a system of user accounts. The user accounting features of UNIX provide a basis for analysis and control of system resources, preventing any user from taking up more than his or her share, and preventing unauthorized people from accessing the system. Every user of a UNIX system must be granted access through some access control mechanism. <br />3.1 Logging in <br /><br />Logging in to a UNIX system requires two pieces of information: a username and a password. When you sit down for a UNIX session, you are given a login prompt that looks like this: <br />login: <br />Type your username at the login prompt, and press the return key. The system will then ask you for your password. When you type your password, the screen will not display what you type. <br />3.2 Your username <br /><br />Your username is assigned by the person who creates your account. The UNIX system administrator is generally responsible for this. Your username must be unique on the system where your account exists, since it is the means by which you are identified on the system. <br />3.3 Your password <br /><br />When your account is created, a password is assigned. The first thing you should do is change your password, using the passwd utility. To change your password, type the command <br />$ passwd <br />after you have logged in. The system will ask for your old password, to prevent someone else from sneaking up and changing your password. Then it will ask for your new password. You will be asked to confirm your new password, to make sure that you didn't mistype. 
It is very important that you choose a good password, so that someone else cannot guess it. Here are some rules for selecting a good password. <br />3.4 Logging Out <br /><br />When you're ready to quit, type the command <br />$ exit <br />It is always a good idea to clear the display before you log out; to do so, you can type the command <br />$ clear <br />4 The UNIX Shell <br /><br />The shell is perhaps the most important program on the UNIX system, from the end-user's standpoint. The shell is your interface with the UNIX system, the middleman between you and the kernel. <br />4.1 Concept <br /><br /> The shell is a type of program called an interpreter. An interpreter operates in a simple loop: it accepts a command, interprets the command, executes the command, and then waits for another command. The shell displays a "prompt" to notify you that it is ready to accept your command. <br /> <br />The shell recognizes a limited set of commands, and you must give commands to the shell in a way that it understands: each shell command consists of a command name, followed by command options (if any are desired) and command arguments (if any are desired). The command name, options, and arguments are separated by blank space. <br />The shell is a program that the UNIX kernel runs for you. A program is referred to as a process while the kernel is running it. The kernel can run the same shell program (or any other program) simultaneously for many users on a UNIX system, and each running copy of the program is a separate process. <br />When you execute a non-built-in shell command, the shell asks the kernel to create a new subprocess (called a "child" process) to perform the command. The child process exists just long enough to execute the command. The shell waits until the child process finishes before it will accept the next command. 
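The child-process behaviour described above can be observed directly. This is a minimal sketch, assuming bash 4 or later (it relies on BASHPID, which reports the current process's own PID, whereas $$ always reports the main shell's PID):

```shell
#!/bin/bash
# $$ is the PID of the shell itself.
shell_pid=$$
# A command substitution runs in a child process (a subshell);
# there, BASHPID is the child's own PID, while $$ is unchanged.
child_pid=$(echo $BASHPID)
echo "shell: $shell_pid  child: $child_pid"
```

The two PIDs differ, showing that the work was done in a separate child process while the shell waited for it to finish.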
<br />Unlike DOS, the UNIX shell is case-sensitive, meaning that an uppercase letter is not equivalent to the same lowercase letter (i.e., "A" is not equal to "a"). Almost all UNIX commands are lowercase. <br />4.2 Entering shell commands <br /><br />The basic form of a UNIX command is: commandname [-options] [arguments] <br />The command name is the name of the program you want the shell to execute. The command options, usually indicated by a dash, allow you to alter the behavior of the command. The arguments are the names of files, directories, or programs that the command needs to access. <br />The square brackets ([ and ]) signify optional parts of the command that may be omitted. <br />Example: Type the command <br />$ ls -l /tmp <br />The above command displays a long listing of the contents of the /tmp directory. In this example, "ls" is the command name, "-l" is an option that tells ls to create a long, detailed output, and "/tmp" is an argument naming the directory that ls is to list. <br />4.3 Aborting a shell command <br /><br />Most UNIX systems will allow you to abort the current command by typing Control-C. To issue a Control-C abort, hold the control key down, and press the "c" key. <br />4.4 Special characters in UNIX <br /><br />UNIX recognizes certain special characters as command directives. If you use one of the UNIX special characters in a command, make sure you understand what it does. The special characters are: / < > ! $ % ^ & * | { } ~ and ; <br />When creating files and directories on UNIX, it is safest to only use the characters A-Z, a-z, 0-9, and the period, dash, and underscore characters. <br />4.5 Getting help on UNIX <br /><br />To access the on-line manuals, use the man command, followed by the name of the command you need help with. <br />Example: Type <br />$ man ls <br />to see the manual page for the "ls" command. To get help on using the manual, type <br /> <br />$ man man <br />to the UNIX shell. 
<br />5 Files and Directories <br /><br /> <br />All the stored information on a UNIX computer is kept in a filesystem. Any time you interact with the UNIX shell, the shell considers you to be located somewhere within a filesystem. Although it may seem strange to be "located" somewhere in a computer's filesystem, the concept is not so different from real life. After all, you can't just be, you have to be somewhere. The place in the filesystem tree where you are located is called the current working directory. <br />5.1 The UNIX file system structure <br /><br /> The UNIX filesystem is hierarchical (resembling a tree structure). The tree is anchored at a place called the root, designated by a slash "/". Every item in the UNIX filesystem tree is either a file, or a directory. A directory is like a file folder. A directory can contain files, and other directories. A directory contained within another is called the child of the other. A directory in the filesystem tree may have many children, but it can only have one parent. A file can hold information, but cannot contain other files, or directories. <br /> <br />To describe a specific location in the filesystem hierarchy, you must specify a "path." The path to a location can be defined as an absolute path from the root anchor point, or as a relative path, starting from the current location. When specifying a path, you simply trace a route through the filesystem tree, listing the sequence of directories you pass through as you go from one point to another. Each directory listed in the sequence is separated by a slash. <br />UNIX provides the shorthand notation of "." to refer to the current location, and ".." to refer to the parent directory. <br />5.2 File and directory permissions <br /><br />UNIX supports access control. Every file and directory has associated with it ownership, and access permissions. Furthermore, one is able to specify those to whom the permissions apply. 
Permissions are defined as read, write, and execute. The read, write, and execute permissions are referred to as r, w, and x, respectively. <br />Those to whom the permissions apply are the user who owns the file, those who are in the same group as the owner, and all others. The user, group, and other permissions are referred to as u, g, and o, respectively. <br />5.2.1 A short note on groups: <br /><br />UNIX allows users to be placed in groups, so that the control of access is made simpler for administrators. <br />5.2.2 The meaning of file and directory permissions <br /><br />• Read permission <br /><br />For a file, having read permission allows you to view the contents of the file. For a directory, having read permission allows you to list the directory's contents. <br />• Write permission <br /><br />For a file, write permission allows you to modify the contents of the file. For a directory, write permission allows you to alter the contents of the directory, i.e., to add or delete files. <br />• Execute permission <br /><br />For a file, execute permission allows you to run the file if it is an executable program or script. Note that file execute permission is irrelevant for non-executable files. For a directory, execute permission allows you to cd to the directory, and make it your current working directory. <br />5.2.3 Viewing permissions <br /><br />To see the permissions on a file, use the ls command, with the -l option. <br />Example: Execute the command <br />$ ls -l /etc/passwd <br />to view the information on the system password database. The output should look similar to this: <br />-rw-r--r-- 1 root sys 41002 Apr 17 12:05 /etc/passwd <br />The first 10 characters describe the access permissions. The first dash indicates the type of file (d for directory, s for special file, - for a regular file). The next three characters ("rw-") describe the permissions of the owner of the file: read and write, but no execute. 
The next three characters ("r--") describe the permissions for those in the same group as the owner: read, no write, no execute. The next three characters describe the permissions for all others: read, no write, no execute. <br />5.2.4 Setting permissions <br /><br />UNIX allows you to set the permissions on files that you own. The command to change the file permission mode is chmod. chmod requires you to specify the new permissions you want, and the file or directory you want the changes applied to. <br />To set file permissions, you may use the "rwx" notation to specify the type of permissions, and the "ugo" notation to specify those the permissions apply to. <br />To define the kind of change you want to make to the permissions, use the plus sign (+) to add a permission, the minus sign (-) to remove a permission, and the equal sign (=) to set a permission directly. <br />Example: Type the command <br />$ chmod g=rw ~/.bash_history <br />to change the file permissions on the file .bash_history in your home directory. Specifically, you are specifying group read access and write access, with no execute access. <br />5.2.5 Changing Directories <br /><br />In UNIX, your location in the filesystem hierarchy is known as your "current working directory." When you log in, you are automatically placed in your "home directory." To see where you are, type the command <br /> <br />$ pwd <br />which stands for "print working directory." <br />To change your location in the filesystem hierarchy, use the cd (change directory) command, followed by an argument defining where you want to go. The argument can be either an absolute path to the destination, or a relative path. <br />Example: Type the command <br />$ cd /tmp <br />to go to the /tmp directory. You can type <br />$ pwd <br />to confirm that you're actually there. If you type the cd command without an argument, the shell will place you in your home directory. 
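The chmod notation above can be combined with ls -l to verify the result. A minimal sketch using a throwaway file created with mktemp (the file name pattern is invented for this example, so no real files are touched):

```shell
#!/bin/bash
# Create a scratch file, set owner=read+write, group=read,
# others=nothing, then read the permission string back.
f=$(mktemp /tmp/perm-demo.XXXXXX)
chmod u=rw,g=r,o= "$f"
# The first 10 characters of ls -l are the type and permissions.
perms=$(ls -l "$f" | cut -c1-10)
echo "$perms"      # -rw-r-----
rm -f "$f"         # clean up the scratch file
```

Note that '=' sets the listed permissions exactly, clearing any bits not mentioned, which is why o= removes all access for others.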
<br />5.2.6 Listing the contents of a directory <br /><br />The ls command allows you to see the contents of a directory, and to view basic information (like size, ownership, and access permissions) about files and directories. The ls command has numerous options, so see the manual page on ls (type man ls) for a complete listing. The ls command also accepts one or more arguments. The arguments can be directories, or files. <br />Example: Type the command <br />$ ls -la /etc/i* <br />to the UNIX shell. For more details on the ls command, please refer to the ls man pages. <br /> <br />5.2.7 Viewing the contents of a file <br /><br />Text files are intended for direct viewing, and other files are intended for computer interpretation. The UNIX file command allows you to determine whether an unknown file is in text format, suitable for direct viewing. <br />Example: Type the command <br />$ file /bin/sh <br />to see what kind of file the shell is. <br />• The cat command <br /><br />The cat command concatenates files and sends them to the screen. You can specify one or more files as arguments. cat makes no attempt to format the text in any way, and long output may scroll off the screen before you can read it. <br />Example: Send the contents of your .profile file to the screen by typing <br />$ cat ~/.profile <br />to the shell. The tilde character (~) is UNIX shorthand for your home directory. <br />• The more command <br /><br />The more command displays a text file, one screen full at a time. You can scroll forward a line at a time by pressing the return key, or a screen full at a time by pressing the spacebar. You can quit at any time by pressing the q key. <br />Example: Type <br />$ more /etc/rc.d/rc.sysinit <br />to the shell. Scroll down by pressing return, and by pressing the spacebar. Stop the more command from displaying the rest of the file by typing q. <br />• The head and tail commands <br /><br />The head command allows you to see the top part of a file. 
You may specify the number of lines you want, or default to ten lines. <br />Example: Type <br />$ head -15 /etc/rc.d/rc.sysinit <br />to see the first fifteen lines of the /etc/rc.d/rc.sysinit file. <br />The tail command works like head, except that it shows the last lines of a file. <br />Example: Type <br />$ tail /etc/rc.d/rc.sysinit <br />to see the last ten lines of the file /etc/rc.d/rc.sysinit. Because we did not specify the number of lines as an option, the tail command defaulted to ten lines. <br />5.2.8 Copying files and directories <br /><br />The UNIX command to copy a file or directory is cp. The basic cp command syntax is <br />$ cp source destination <br />Example: The command <br />$ cp ~/.bash_profile ~/pcopy <br />makes a copy of your .bash_profile file, and stores it in a file called "pcopy" in your home directory. <br />EXERCISE: Describe the permissions necessary to successfully execute the command in the previous example. <br />5.2.9 Moving and renaming files <br /><br />The UNIX mv command moves files and directories. You can move a file to a different location in the filesystem, or change the name by moving the file within the current location. <br />Example: The command <br />$ mv ~/pcopy ~/qcopy <br />takes the pcopy file you created in the cp exercise, and renames it "qcopy". <br />5.2.10 Removing files <br /><br />The rm command is used for removing files and directories. The syntax of the rm command is rm filename. You may include many filenames on the command line. <br />Example: Remove the qcopy file that you placed in your home directory in the section on moving files by typing <br />$ rm ~/qcopy <br />5.2.11 Creating a directory <br /><br />The UNIX mkdir command is used to make directories. The basic syntax is mkdir directoryname. If you do not specify the place where you want the directory created (by giving a path as part of the directory name), the shell assumes that you want the new directory placed within the current working directory. 
<br />Example: Create a directory called foo within your home directory by typing <br />$ mkdir ~/foo <br />5.2.12 Removing a directory <br /><br />The UNIX rmdir command removes a directory from the filesystem tree. The rmdir command does not work unless the directory to be removed is completely empty. The rm command, used with the -r option, can also be used to remove directories. The rm -r command will first remove the contents of the directory, and then remove the directory itself. <br />Example: You could enter the commands <br />$ rmdir ~/foo/bar; rmdir ~/foo <br />or, equivalently, the single command <br />$ rm -r ~/foo <br /> <br /> <br />6 Redirecting Input and Output <br /><br />Every program you run from the shell opens three files: standard input, standard output, and standard error. These files provide the primary means of communication between programs, and exist for as long as the process runs. The standard input file provides a way to send data to a process. By default, the standard input is read from the terminal keyboard. The standard output provides a means for the program to output data. By default, the standard output goes to the terminal display screen. The standard error is where the program reports any errors encountered during execution. By default, the standard error goes to the terminal display. <br />6.1 Redirecting input <br /><br />Using the "less-than" sign with a file name like this: <br />< file1 <br />in a shell command instructs the shell to read input from a file called "file1" instead of from the keyboard. <br />Example: Use standard input redirection to send the contents of the file /etc/passwd to the more command: <br />$ more < /etc/passwd <br />Many UNIX commands that accept a file name as a command-line argument will also accept input from standard input if no file is given on the command line. 
<br />Example: To see the first ten lines of the /etc/passwd file, the command: <br />$ head /etc/passwd <br />will work just the same as the command: <br />$ head < /etc/passwd <br />6.2 Redirecting output <br /><br />Using the "greater-than" sign with a file name like this: <br />> file2 <br />causes the shell to place the output from the command in a file called "file2" instead of on the screen. If the file "file2" already exists, the old version will be overwritten. <br />Example: Type the command <br />$ ls /tmp > ~/ls.out <br />to redirect the output of the ls command into a file called "ls.out" in your home directory. Remember that the tilde (~) is UNIX shorthand for your home directory. In this command, the ls command will list the contents of the /tmp directory. <br />Use two "greater-than" signs to append to an existing file. For example: <br />>> file2 <br />causes the shell to append the output from a command to the end of a file called "file2". If the file "file2" does not already exist, it will be created. <br />Example: In this example, I list the contents of the /tmp directory, and put it in a file called myls. Then, I list the contents of the /etc directory, and append it to the file myls: <br />$ ls /tmp > myls <br />$ ls /etc >> myls <br />6.3 Redirecting error <br /><br />Redirecting standard error is a bit trickier, depending on the kind of shell you're using (there's more than one flavor of shell program!). In the POSIX shell and ksh, redirect the standard error with the symbol "2>". <br />Example: Sort the /etc/passwd file, place the results in a file called foo, and trap any errors in a file called err with the command: <br />$ sort < /etc/passwd > foo 2> err <br /> <br />7 Pipelines and Filters <br /> 7.1 Concept: <br /><br /> <br />UNIX allows you to connect processes, by letting the standard output of one process feed into the standard input of another process. That mechanism is called a pipe. 
Connecting simple processes in a pipeline allows you to perform complex tasks without writing complex programs. <br />Example: Using the more command, and a pipe, send the contents of your .profile and .shrc files to the screen by typing <br />$ cat .profile .shrc | more <br />to the shell. The following command uses head and tail in a pipeline to display lines 26 through 75 of a file (head -75 passes lines 1 through 75, and tail -50 keeps the last fifty of those). <br />$ cat file | head -75 | tail -50 <br />7.2 Grep <br /><br />The grep utility is one of the most useful filters in UNIX. Grep searches line-by-line for a specified pattern, and outputs any line that matches the pattern. The basic syntax for the grep command is grep [-options] pattern [file]. If the file argument is omitted, grep will read from standard input. It is always best to enclose the pattern within single quotes, to prevent the shell from misinterpreting the command. <br />The grep utility recognizes a variety of patterns, using the same regular-expression syntax as the vi editor. Here are some of the characters you can use to build grep expressions: <br />• The caret (^) matches the beginning of a line. <br />• The dollar sign ($) matches the end of a line. <br />• The period (.) matches any single character. <br />• The asterisk (*) matches zero or more occurrences of the previous character. <br />• The expression [a-b] matches any characters that are lexically between a and b. <br /><br />Example: Type the command <br />$ grep 'jon' /etc/passwd <br />to search the /etc/passwd file for any lines containing the string "jon". <br />Type the command <br />$ grep '^jon' /etc/passwd <br />to see the lines in /etc/passwd that begin with the character string "jon". <br /> <br />$ ls -l /tmp | grep 'root' <br />lists the entries in the /tmp directory whose long listing contains the string "root" (for example, files owned by the user root). 
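The pattern anchors and the pipeline behaviour described above can be seen together in a short, self-contained sketch (the sample file name and its contents are illustrative):

```shell
# Build a three-line sample file to search.
printf 'jon:x:100\nbob:x:101\nmajon:x:102\n' > /tmp/grep-demo.txt

# A bare pattern matches anywhere in the line:
# both "jon:x:100" and "majon:x:102" contain "jon".
grep 'jon' /tmp/grep-demo.txt

# The caret anchors the match to the start of the line,
# so only "jon:x:100" is printed.
grep '^jon' /tmp/grep-demo.txt

# grep as a pipeline filter; the -c option counts matching lines.
cat /tmp/grep-demo.txt | grep -c 'jon'
```

The final command prints 2, because two of the three lines contain "jon" somewhere.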
<br /> <br />8 Examples of Basic Commands <br /><br /> <br /> <br /> Action Command Examples <br />Append to file cat >> cat >> file1 <br />Combine 2 files cat cat file1 file2 > file3 <br />Copy files cp cp myfile copymyfile <br />Create a file cat cat > newfile <br />Edit files vi vi file <br />List files ls ls bin/ <br />Move a file mv mv file1 doc/chapter1 <br />Remove a file rm rm unwantedfile <br />Rename a file mv mv oldfilename newfilename<br />View files cat, pg, more, less, view cat file pg file2 file3 view file6 file7 <br /><br /> Directories Command Examples <br />Change to another directory cd cd example/first/<br />Create a directory mkdir mkdir example1<br />Find out where you are pwd pwd <br />Go to your home directory cd cd <br />Remove an empty directory rmdir rmdir junk <br /><br />Redirection of Output or Input <br />> redirects the output of a command to a file <br />>> redirects the output of a command to the end of an existing file <br />< takes the input of a command from a file, not the terminal <br />Summary of Basic Commands <br />• apropos locate commands by keyword lookup <br />• arch display the architecture of the current host <br />• cal display a calendar cal [month] year <br /> o month number between 1 and 12 <br /> o year number between 1 and 9999 <br /><br />Examples: cal 1996 print calendar for year 1996 cal 1 1997 print calendar for January 1997 <br />• cancel send/cancel requests to an LP print service <br />• cat concatenate and display files (to view files, create files, append to files and combine files) cat [options] [files] Examples: cat files read file(s) cat > file create file (reads from terminal; terminate input with ^D) cat >> file append to file (reads from terminal; terminate input with ^D) cat file2 >> file1 appends contents of file2 to file1 <br />• cd shell built-in function to change the current working directory <br />• chdir shell built-in function to change the current working directory <br />• chgrp change the group ownership 
of a file <br />• chmod change the permissions mode of a file <br />• chown change owner of file <br />• clear clear the terminal screen <br />• cp copy files <br />• date print and set the date <br />• dc arbitrary precision desktop calculator <br />• dos2unix convert text file from DOS format to ISO format <br />• eject eject media such as CD-ROM and floppy from drive <br />• exit shell built-in function to enable the execution of the shell to advance beyond its sequence of steps <br />• file determine the type of a file by examining its contents <br />• head display first few lines of files <br />• lp send/cancel requests to an LP print service <br />• lpstat print information about the status of the LP print service <br />• ls list the contents of a directory ls [options] [directories] the current working directory is used if no directories are specified A few options: <br /> o -a list all entries including hidden files (starting with .) <br /> o -i print inode numbers <br /> o -l long list (mode, links, owner, group, size, time of last modification, and name) <br /> o -t sort by modification time <br /> o -x multi-column list, sorted across each row <br />• mail, mailx, rmail interactive message processing system to read mail or send mail to users mail [options] users Examples: mail with no options, to read your mail mail user to send mail to user mail user < filename mail a file to another user <br />• mkdir make directories <br />• more browse or page through a text file <br />• mv move files <br />• nispasswd change NIS+ password information <br />• page browse or page through a text file <br />• pg files perusal filter for CRTs <br />• pr print files <br />• ps display the status of current processes <br />• pwd print working directory name <br />• rm remove files or directories <br />• rmdir remove empty directories <br />• spell find spelling errors <br />• tail deliver the last part of a file <br />• umask shell built-in function to restrict 
read/write/execute permissions <br />• unix2dos convert text file from ISO format to DOS format <br />• vi screen-oriented (visual) display editor based on ex <br />• view screen-oriented (visual) display editor based on ex <br />• w who is logged in, and what they are doing <br />• wc display a count of lines, words and characters in a file <br />• which locate a command; display its pathname or alias <br />• who who is on the system <br />• whoami display the effective current username <br />• whois Internet user name directory service <br />• write write to another user <br /><br />9 The vi Editor <br /><br /> <br />Vi (visual) is a display-oriented interactive text editor, probably the most widely used text editor in the UNIX world. When using vi, the screen of your terminal acts as a window into the file you are editing. Changes you make to the file are reflected in what you see. <br />9.1 Basics of vi editor: <br /><br />Vi operates in two modes (insert and command) in order to determine which function should be performed when a key is pressed. To start vi, just type vi at the command prompt: <br />$ vi file_name <br />You will see the text of the file you specified. Vi is now in command mode. The most basic command to enter insert mode is pressing i, which lets you insert text to the left of the cursor. The escape key (<esc>) takes you out of insert mode and back to command mode. If you are ever in doubt about what mode you are in, just press <esc> a few times until vi starts complaining. You will then know that you are in command mode. <br />9.2 Basic vi keys <br /><br /> Delete a character: x <br /> Delete a line: dd <br /> Delete n lines: ndd (where n is any number) <br /> Copy a line: yy <br /> Copy n lines: nyy (where n is any number) <br /> Paste a line: p <br /> Beginning of file: :0 <br /> End of file: Shift + g <br /> Beginning of line: 0 <br /> End of a line: $ <br /> Change word: cw <br /> Repeat last executed task: . 
<br /> Find a word: /<word> <br /> Find again: n <br /> Save: :w <br /> Save As: :w <file name> <br /> Save and exit: :wq <br /> Quit without saving: :q! <br /> <br />10 Networking Basics <br /><br /> <br />10.1 ifconfig <br /><br />The "ifconfig" command allows the operating system to set up network interfaces and allows the user to view information about the configured network interfaces. <br /> <br />To view the network settings on the first Ethernet adapter installed in the computer, you can use the following command: <br />$ ifconfig eth0 <br />Use <br />$ ifconfig <br />to display all network interface settings, including the loopback interface. <br /> <br /># ifconfig eth0 down <br />brings eth0 (the first Ethernet interface) down on your UNIX system. We have used ‘#’ as the command prompt here, because only ‘root’ can do this. <br /> <br /># ifconfig eth0 up <br />activates the network interface. <br /> <br />10.2 ping <br /><br />Ping stands for “Packet InterNet Groper". Ping sends ICMP ECHO_REQUEST packets to network hosts. <br /> <br />Example: <br /> <br />$ ping computerhope.com <br /> <br />would ping the host computerhope.com to see if it is alive. <br />Note: Many ISPs disable the ping command to help prevent possible denial-of-service attacks. In addition, some commands may not be available, or results may vary, when pinging a host. <br /> <br />10.3 Netstat <br /><br />The netstat command in UNIX shows the network status. <br /> <br />Example: <br />$ netstat <br />displays generic net statistics of the host you are currently connected to. <br /> <br />$ netstat -t <br />shows active TCP connections on your UNIX system. <br /> <br />11 Reference <br />1. www.linux.org <br />2. www.linux-docs.org <br />3. www.computerhope.com/unix.htm <br />4. www.xfree86.org <br />5. 
Unix/Linux man pages <br /><br /><span style="font-weight:bold;">Performance monitoring</span> (2008-01-14)<br />Performance testing often concentrates on new or changed systems. These may form just one part of the IT infrastructure within an organization. The following questions can be answered only by taking a broader approach:<br /><br />Will end-to-end application performance always remain satisfactory? Will all SLAs (service-level agreements) continue to be met? What additional load will be imposed on the network? When will additional hardware be needed?<br /><br />Acutest provide performance monitoring services to address these performance issues before operational service begins.<br /><br />Acutest can recommend performance monitoring tools to suit the needs of your organization. These include:<br />• Mercury SiteScope - monitoring solution to ensure availability and performance of distributed IT infrastructures<br />• Compuware Vantage - performance monitoring including application, network and server monitoring<br />• IBM Tivoli Monitoring for Transaction Performance.<br />Additionally, we can provide expertise to model system performance and capacity based on performance testing work carried out before launch.<br /><span style="font-weight:bold;">Performance tuning</span><br />Many organizations need to obtain optimum performance from their existing IT systems, as they are constrained from making hardware changes solely to improve performance.<br /><br />Tuning IT systems is not a simple exercise – there is usually a balance between performance improvements, throughput requirements and impact on your finite hardware resources.<br /><br />Acutest can provide performance tuning services to recommend how best to tune IT systems, balancing all these factors. 
This can often be done as part of a planned load test – saving you time and money.<br /><br />We also recommend testing as early as possible in your development life-cycle to eliminate performance issues. The benefits from this approach are often achieved by appropriate use of profiling and tuning tools integrated into the development process. These are often used in conjunction with performance testing tools.<br /><br />Acutest can recommend performance tuning tools and services to suit the needs of your organization. These include:<br />• Mercury Tuning - Outsourced Performance Validation Service (formerly ProTune).<br />• Mercury Diagnostics - Drill-down from business processes into J2EE, .NET and ERP/CRM applications.<br />• Compuware DevPartner - Development, debugging and tuning for Java, .NET, web-enabled and distributed applications<br /><br />Performance problems in operational IT systems and software applications are costly both in terms of business disruption and remedial work. These issues tend to go undetected prior to launch because of the difficulty of conducting realistic performance testing.<br /><br />We have created our Performance Testing Service to address this problem before you launch. 
Through the use of proven, advanced techniques, a structured testing approach and appropriate performance testing tools, we will reduce the risks of performance failure for new or enhanced applications.<br /><br />Our performance testing service elements include:<br />• Load and stress testing<br />• Scalability and volume testing<br />• Endurance and soak testing<br />• Performance testing tool evaluation<br />• Performance monitoring and tuning<br />For a short explanation of these testing terms visit load testing and other types of non-functional testing<br /><br />Our services are suitable for performance testing:<br />• Software applications (including web-enabled applications and systems)<br />• IT systems (including embedded systems)<br />• Infrastructure (including the network)<br />• Multi-tiered solutions that include a combination of applications, IT systems and infrastructure.<br />Mercury SiteScope<br />• Do you need to ensure infrastructure availability and performance across a wide range of hardware, operating systems, applications, and technologies? <br />• Do you need a customizable solution that supports real-time views, alerting, reporting and even technologies such as web services and XML? <br />• Is agent-based monitoring proving to be expensive to deploy, install, maintain, manage, and support? And does it cause unnecessary overhead and instability on production systems? <br />• Are large monitoring frameworks not delivering the value and return on investment you had expected? <br />Mercury SiteScope® is the industry’s first and leading agentless monitoring solution designed to ensure the availability and performance of distributed IT infrastructures — e.g., servers, operating systems, network devices, network services, applications, and application components. 
This proactive, Web-based infrastructure monitoring and network monitoring solution is lightweight, highly customizable, and doesn't require high-overhead agents on your production systems. With Mercury SiteScope, you gain the real-time information you need to verify infrastructure operations, stay apprised of problems, and solve bottlenecks before they become critical. <br />With Mercury SiteScope, you can:<br />• Lower total cost of ownership (TCO) by consolidating support and maintenance tasks to a central agentless server. <br />• Reduce the need to track and remotely administer thousands of agents, including their overhead on production systems. <br />• Save time and money by deploying Solution Templates - best practice-based groups of monitors for key applications. <br />• Eliminate the need for multiple solutions via SiteScope’s industry-leading breadth of 65-plus supported monitoring targets. <br />• Leverage SiteScope’s flexible architecture to custom-fit the solution to your company’s unique environment, now and in the future. <br /><br />Who's Using It?<br /> <br />COLT Telecom Group harnesses Web monitoring with Mercury Application Management solutions. <br /><br /><span style="font-weight:bold;">QTP FAQ</span> (2008-01-14)<br />1. What are the Features & Benefits of Quick Test Pro (QTP)? - Operates stand-alone, or integrated into Mercury Business Process Testing and Mercury Quality Center. Introduces next-generation zero-configuration Keyword Driven testing technology in Quick Test Professional 8.0, allowing for fast test creation, easier maintenance, and more powerful data-driving capability. 
Identifies objects with Unique Smart Object Recognition, even if they change from build to build, enabling reliable unattended script execution. Collapses test documentation and test creation to a single step with Auto-documentation technology. Enables thorough validation of applications through a full complement of checkpoints. <br />2. How to handle the exceptions using recovery scenario manager in QTP? - There are four trigger events during which a recovery scenario should be activated: a pop-up window appears in an opened application during the test run; a property of an object changes its state or value; a step in the test does not run successfully; an open application fails during the test run. These triggers are considered exceptions. You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps: 1. Triggered Events 2. Recovery Steps 3. Post-Recovery Test Run <br />3. What is the use of Text output value in QTP? - Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus by creating output values, we can capture the values that the application takes for each run and output them to the data table. <br />4. How to use the Object Spy in QTP? - There are two ways to spy the objects in QTP: 1) Through the File toolbar: click on the last toolbar button (an icon showing a person with a hat). 2) Through the Object Repository dialog: click on the button Object Spy. In the Object Spy dialog, click on the button showing a hand symbol. The pointer now changes into a hand symbol, and we have to point to the object to spy its state. If the object is not visible 
or the window is minimized, hold the Ctrl key, activate the required window, and then release the Ctrl key. <br />5. How is run-time data (parameterization) handled in QTP? - You can enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files. <br />6. What is Keyword View and Expert View in QTP? - With Quick Test’s Keyword Driven approach, test automation experts have full access to the underlying test and object properties, via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View. Advanced testers can view and edit their tests in the Expert View, which reveals the underlying industry-standard VBScript that Quick Test Professional automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword View. <br />7. Explain about the Test Fusion Report of QTP? - Once a tester has run a test, a Test Fusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining Test Fusion reports with Quick Test Professional, you can share reports across an entire QA and development team. <br />8. Which environments does QTP support? - Quick Test Professional supports functional testing of all enterprise environments, including Windows, Web, .NET, Java/J2EE, SAP, Siebel, Oracle, PeopleSoft, Visual Basic, ActiveX, mainframe terminal emulators, and Web services. <br />9. What is QTP? - Quick Test is a graphical interface record-playback automation tool. 
It is able to work with any web, Java, or Windows client application. Quick Test enables you to test standard web objects and ActiveX controls. In addition to these environments, Quick Test Professional also enables you to test Java applets and applications and multimedia objects on applications, as well as standard Windows applications, Visual Basic 6 applications and .NET Framework applications. <br />10. Explain QTP Testing process? - The Quick Test testing process consists of the following main phases: <br />11. Create your test plan - Prior to automating, there should be a detailed description of the test, including the exact steps to follow, data to be input, and all items to be verified by the test. The verification information should include both data validations and existence or state verifications of objects in the application. <br />12. Recording a session on your application - As you navigate through your application, Quick Test graphically displays each step you perform in the form of a collapsible icon-based test tree. A step is any user action that causes or makes a change in your site, such as clicking a link or image, or entering data in a form. <br />13. Enhancing your test - Inserting checkpoints into your test lets you search for a specific value of a page, object or text string, which helps you identify whether or not your application is functioning correctly. NOTE: Checkpoints can be added to a test as you record it or after the fact via the Active Screen. It is much easier and faster to add the checkpoints during the recording process. Broadening the scope of your test by replacing fixed values with parameters lets you check how your application performs the same operations with multiple sets of data. Adding logic and conditional statements to your test enables you to add sophisticated checks to your test. <br />14. Debugging your test - If changes were made to the script, you need to debug it to check that it operates smoothly and without interruption. <br />15. 
Running your test on a new version of your application - You run a test to check the behavior of your application. While running, Quick Test connects to your application and performs each step in your test. <br />16. Analyzing the test results - You examine the test results to pinpoint defects in your application. <br />17. Reporting defects - As you encounter failures in the application when analyzing test results, you will create defect reports in a Defect Reporting Tool. <br />18. Explain the QTP Tool interface. - It contains the following key elements: <br />Title bar, displaying the name of the currently open test. <br />Menu bar, displaying menus of Quick Test commands. <br />File toolbar, containing buttons to assist you in managing tests. <br />Test toolbar, containing buttons used while creating and maintaining tests. <br />Debug toolbar, containing buttons used while debugging tests. Note: The Debug toolbar is not displayed when you open Quick Test for the first time. You can display the Debug toolbar by choosing View > Toolbars > Debug. <br />Action toolbar, containing buttons and a list of actions, enabling you to view the details of an individual action or the entire test flow. Note: The Action toolbar is not displayed when you open Quick Test for the first time. You can display the Action toolbar by choosing View > Toolbars > Action. If you insert a reusable or external action in a test, the Action toolbar is displayed automatically. <br />Test pane, containing two tabs to view your test: the Tree View and the Expert View. <br />Test Details pane, containing the Active Screen. <br />Data Table, containing two tabs, Global and Action, to assist you in parameterizing your test. <br />Debug Viewer pane, containing three tabs to assist you in debugging your test: Watch Expressions, Variables, and Command. (The Debug Viewer pane can be opened only when a test run pauses at a breakpoint.) <br />Status bar, displaying the status of the test. <br />19. How does QTP recognize Objects in AUT? 
- Quick Test stores the definitions for application objects in a file called the Object Repository. As you record your test, Quick Test will add an entry for each item you interact with. Each Object Repository entry will be identified by a logical name (determined automatically by Quick Test), and will contain a set of properties (type, name, etc) that uniquely identify each object. Each line in the Quick Test script will contain a reference to the object that you interacted with, a call to the appropriate method (set, click, check) and any parameters for that method (such as the value for a call to the set method). The references to objects in the script will all be identified by the logical name, rather than any physical, descriptive properties. <br />20. What are the types of Object Repositories in QTP? - Quick Test has two types of object repositories for storing object information: shared object repositories and action object repositories. You can choose which type of object repository you want to use as the default type for new tests, and you can change the default as necessary for each new test. The object repository per-action mode is the default setting. In this mode, Quick Test automatically creates an object repository file for each action in your test so that you can create and run tests without creating, choosing, or modifying object repository files. However, if you do modify values in an action object repository, your changes do not have any effect on other actions. Therefore, if the same test object exists in more than one action and you modify an object’s property values in one action, you may need to make the same change in every action (and any test) containing the object. <br />21. Explain the check points in QTP? - A checkpoint verifies that expected information is displayed in an Application while the test is running. You can add eight types of checkpoints to your test for standard web objects using QTP. 
A page checkpoint checks the characteristics of an Application. A text checkpoint checks that a text string is displayed in the appropriate place on an Application. An object checkpoint (Standard) checks the values of an object on an Application. An image checkpoint checks the values of an image on an Application. A table checkpoint checks information within a table on an Application. An accessibility checkpoint checks the web page for Section 508 compliance. An XML checkpoint checks the contents of individual XML data files or XML documents that are part of your Web application. A database checkpoint checks the contents of databases accessed by your web site. <br />22. In how many ways we can add check points to an application using QTP? - We can add checkpoints while recording the application, or we can add them after recording is completed using the Active Screen (Note: to perform the second one, the Active Screen must be enabled while recording). <br />23. How does QTP identify objects in the application? - QTP identifies the object in the application by Logical Name and Class. <br />24. What is Parameterizing Tests? - When you test your application, you may want to check how it performs the same operations with multiple sets of data. For example, suppose you want to check how your application responds to ten separate sets of data. You could record ten separate tests, each with its own set of data. Alternatively, you can create a parameterized test that runs ten times: each time the test runs, it uses a different set of data. <br />25. What is test object model in QTP? - The test object model is a large set of object types or classes that Quick Test uses to represent the objects in your application. Each test object class has a list of properties that can uniquely identify objects of that class and a set of relevant methods that Quick Test can record for it. 
A test object is an object that Quick Test creates in the test or component to represent the actual object in your application. Quick Test stores information about the object that will help it identify and check the object during the run session. <br />26. What is Object Spy in QTP? - Using the Object Spy, you can view the properties of any object in an open application. You use the Object Spy pointer to point to an object. The Object Spy displays the selected object’s hierarchy tree and its properties and values in the Properties tab of the Object Spy dialog box. <br />27. What is the Diff between Image check-point and Bit map Check point? - Image checkpoints enable you to check the properties of a Web image. You can check an area of a Web page or application as a bitmap. While creating a test or component, you specify the area you want to check by selecting an object. You can check an entire object or any area within an object. Quick Test captures the specified object as a bitmap, and inserts a checkpoint in the test or component. You can also choose to save only the selected area of the object with your test or component in order to save disk space. For example, suppose you have a Web site that can display a map of a city the user specifies. The map has control keys for zooming. You can record the new map that is displayed after one click on the control key that zooms in the map. Using the bitmap checkpoint, you can check that the map zooms in correctly. You can create bitmap checkpoints for all supported testing environments (as long as the appropriate add-ins are loaded). Note: The results of bitmap checkpoints may be affected by factors such as operating system, screen resolution, and color settings. <br />28. How many ways we can parameterize data in QTP? - There are four types of parameters: Test, action or component parameters enable you to use values passed from your test or component, or values from other actions in your test. 
Data Table parameters enable you to create a data-driven test (or action) that runs several times using the data you supply. In each repetition, or iteration, QuickTest uses a different value from the Data Table. Environment variable parameters enable you to use variable values from other sources during the run session. These may be values you supply, or values that QuickTest generates for you based on conditions and options you choose. Random number parameters enable you to insert random numbers as values in your test or component. For example, to check how your application handles small and large ticket orders, you can have QuickTest generate a random number and insert it in a number-of-tickets edit field. <br />29. How do you do batch testing in WinRunner, and is it possible in QTP? If so, explain. - Batch testing in WinRunner is nothing but running the whole test set by selecting Run Test Set from the Execution Grid. The same is possible with QTP as well: if our test cases are automated, then by selecting Run Test Set all the test scripts can be executed. In this process the scripts are executed one by one, with all the remaining scripts kept in waiting mode. <br />30. If I give you some thousand tests to execute in two days, what do you do? - Ad hoc testing is done. It covers the basic functionalities to verify that the system is working fine. <br />31. What does it mean when a checkpoint is in red color? What do you do? - A red color indicates failure. Here we analyze the cause of the failure: whether it is a script issue, an environment issue, or an application issue. <br />32. What is the file extension of the code file and the object repository file in QTP? - The code file extension is .vbs and the object repository is .tsr. <br />33. Explain the concept of the object repository and how QTP recognizes objects. - The object repository displays a tree of all objects in the current component, in the current action, or in the entire test (depending on the object repository mode you selected). We can view or modify the test object description of any test object in the repository, or add new objects to the repository. QuickTest learns an object's default property values and determines which test object class it fits. If that is not enough, it adds assistive properties, one by one, to the description until it has compiled a unique description. If no assistive properties are available, it adds a special ordinal identifier, such as the object's location on the page or in the source code. <br />34. What properties would you use for identifying a browser and a page when using descriptive programming? - Name would be another property, apart from title, that we can use. <br />35. Give me an example where you have used a COM interface in your QTP project. - A COM interface appears in the scenario of a front end and a back end. For example, if you are using Oracle as the back end and VB (or any other language) as the front end, then for better compatibility we go for an interface, of which COM is one. CreateObject creates a handle to an instance of the specified object so that our program can use the methods of that object. It is used for implementing Automation (as defined by Microsoft). <br />36. Explain in brief the QTP Automation Object Model. - Essentially all configuration and run functionality provided via the QuickTest interface is in some way represented in the QuickTest automation object model via objects, methods, and properties. 
Although a one-to-one comparison cannot always be made, most dialog boxes in QuickTest have a corresponding automation object, most options in dialog boxes can be set and/or retrieved using the corresponding object property, and most menu commands and other operations have corresponding automation methods. You can use the objects, methods, and properties exposed by the QuickTest automation object model, along with standard programming elements such as loops and conditional statements, to design your program.
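As a sketch of the automation object model described above, an external VBScript can drive QuickTest itself. This assumes QuickTest is installed; the test path below is a placeholder, not a real test:

```vbscript
' Hedged sketch: driving QuickTest through its automation object model
' from a standalone VBScript. The test path is a placeholder.
Dim qtApp
Set qtApp = CreateObject("QuickTest.Application") ' start the automation session
qtApp.Launch                                      ' launch the QuickTest application
qtApp.Visible = True                              ' show the QuickTest window
qtApp.Open "C:\Tests\SampleTest"                  ' open an existing test (placeholder path)
qtApp.Test.Run                                    ' run the open test with default options
qtApp.Quit                                        ' close QuickTest
Set qtApp = Nothing
```

A script like this is typically run with the Windows Script Host (cscript.exe) outside of QuickTest.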
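The Data Table parameters discussed above can be sketched in a test step as follows; the column name and the browser, page, and edit-field names are placeholders for illustration:

```vbscript
' Hedged sketch: a data-driven step that reads a different value from
' the "Tickets" column of the Global data sheet on each iteration.
' Object names below are placeholders, not a real application.
numTickets = DataTable("Tickets", dtGlobalSheet)
Browser("Flight Booking").Page("Book Flight").WebEdit("numPassengers").Set numTickets
```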
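The descriptive-programming question above (identifying a browser and a page by their title or name properties) can be illustrated with a minimal sketch; the property values are placeholders:

```vbscript
' Hedged sketch: addressing a browser and page by description instead of
' the object repository, using the title and name properties.
' All values here are placeholders.
Browser("title:=Mercury Tours").Page("title:=Mercury Tours").WebEdit("name:=userName").Set "tutorial"
```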