Monday, January 14, 2008

Mercury QuickTest Professional: Features & Benefits

• Ensure immediate return on investment through industry-leading ease of use and pre-configured environment support.
• Operate stand-alone, or integrated into Mercury Business Process Testing and Mercury Quality Center.
• Introduce next-generation "zero-configuration" Keyword Driven testing technology in QuickTest Professional — allowing for fast test creation, easier maintenance, and more powerful data-driving capability.
• Promote collaboration and sharing of test assets among testing groups through an enterprise-class object repository.
• Identify objects with Unique Smart Object Recognition, even if they change from build to build, enabling reliable unattended script execution.
• Manage multiple object repositories with ease to facilitate the building of automation frameworks and libraries.
• Handle unforeseen application events with Recovery Manager, facilitating 24x7 testing to meet test project deadlines.
• Reduce time to resolve defects by automatically reproducing defects and identifying problems with the built-in test execution recorder.
• Collapse test documentation and test creation to a single step with Auto-documentation technology.
• Easily data-drive any object definition, method, checkpoint, and output value via the Integrated Data Table.
• Provide a complete IDE environment for QA engineers.
• Preserve your investment in Mercury WinRunner by reusing existing test scripts written in the Test Script Language (TSL) through the QuickTest Professional/WinRunner integration.
• Provide detailed, step-by-step reports, now with video.
• Enable thorough validation of applications through a full complement of checkpoints.
Mercury QuickTest Professional: How It Works
Mercury QuickTest Professional™ allows even novice testers to be productive in minutes. You can create a test script by simply pressing a Record button and using the application to perform a typical business process. Each step in the business process is automatically documented with a plain-English sentence and a screen shot. Users can easily modify, remove, or rearrange test steps in the Keyword View.
QuickTest Professional can automatically introduce checkpoints to verify application properties and functionality, for example to validate output or check link validity. For each step in the Keyword View, there is an ActiveScreen showing exactly how the application under test looked at that step. You can also add several types of checkpoints for any object to verify that components behave as expected, simply by clicking on that object in the ActiveScreen.
You can then enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.
Advanced testers can view and edit their test scripts in the Expert View, which reveals the underlying industry-standard VBScript that QuickTest Professional automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword View.
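To give a feel for what the Expert View exposes, here is a minimal sketch of the kind of VBScript a recorded login step, a checkpoint, and a Data Table-driven step might produce. The repository object names ("Mercury Tours", "userName", and so on) and the Data Table column are assumptions made up for illustration; the Browser/Page object syntax, the Check CheckPoint call, and the DataTable object are standard QuickTest Professional constructs.

' Recorded steps as they appear in the Expert View (object names are hypothetical).
Browser("Mercury Tours").Page("Welcome").WebEdit("userName").Set "demo"
Browser("Mercury Tours").Page("Welcome").WebButton("Sign-In").Click

' A standard checkpoint recorded on the landing page; it verifies the page's
' recorded properties each time the test runs.
Browser("Mercury Tours").Page("Flight Finder").Check CheckPoint("Flight Finder")

' A step parameterized against the Data Table: each row of the hypothetical
' "PassengerName" column becomes one test iteration.
Browser("Mercury Tours").Page("Book Flight").WebEdit("passengerName").Set DataTable("PassengerName", dtGlobalSheet)

Editing any of these lines in the Expert View updates the corresponding row in the Keyword View, and vice versa.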
Once a tester has run a script, a TestFusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test script specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining TestFusion reports with Mercury TestDirector, you can share reports across an entire QA and development team.
QuickTest Professional also facilitates the update process. As an application under test changes, such as when a “Login” button is renamed “Sign In,” you can make one update to the Shared Object Repository, and the update will propagate to all scripts that reference this object. You can publish test scripts to Mercury TestDirector, enabling other QA team members to reuse your test scripts, eliminating duplicative work.
QuickTest Professional supports functional testing of all popular environments, including Windows, Web, .Net, Visual Basic, ActiveX, Java, SAP, Siebel, Oracle, PeopleSoft, and terminal emulators.
What is the difference between an image checkpoint and a bitmap checkpoint?
An image checkpoint lets you check the properties of a Web image, such as its source and destination. A bitmap checkpoint, by contrast, checks an area of a Web page or application as a bitmap, comparing actual pixels rather than object properties. While creating a test or component, you specify the area you want to check by selecting an object; you can check an entire object or any area within an object. QuickTest captures the specified area as a bitmap and inserts a checkpoint into the test or component. To save disk space, you can also choose to save only the selected area of the object with your test or component. For example, suppose you have a Web site that displays a map of a city the user specifies, with control keys for zooming. You can record the new map displayed after one click on the zoom-in control and, using a bitmap checkpoint, verify that the map zooms in correctly.

You can create bitmap checkpoints for all supported testing environments (as long as the appropriate add-ins are loaded).
Note: The results of bitmap checkpoints may be affected by factors such as operating system, screen resolution, and color settings.
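As a rough sketch of how the zooming-map example might look in a script: the checkpoint itself is created at design or record time (for instance from the ActiveScreen), and the generated statement below then executes it on each run. The object and checkpoint names here are made up for illustration; the Check CheckPoint syntax is the standard QuickTest form.

' Zoom in once, then compare the displayed map against the bitmap captured
' when the checkpoint was created (object and checkpoint names are hypothetical).
Browser("City Maps").Page("Map View").WebButton("Zoom In").Click
Browser("City Maps").Page("Map View").Image("cityMap").Check CheckPoint("cityMap zoomed")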

Testing Mistakes



It's easy to make mistakes when testing software or planning a testing effort. Some
mistakes are made so often, so repeatedly, by so many different people, that they deserve
the label Classic Mistake.
Classic mistakes cluster usefully into five groups, which I’ve called “themes”:
· The Role of Testing: who does the testing team serve, and how does it do that?
· Planning the Testing Effort: how should the whole team’s work be organized?
· Personnel Issues: who should test?
· The Tester at Work: designing, writing, and maintaining individual tests.
· Technology Rampant: quick technological fixes for hard problems.
I have two goals for this paper. First, it should identify the mistakes, put them in context,
describe why they’re mistakes, and suggest alternatives. Because the context of one
mistake is usually prior mistakes, the paper is written in a narrative style rather than as a
list that can be read in any order. Second, the paper should be a handy checklist of
mistakes. For that reason, the classic mistakes are printed in a larger bold font when they
appear in the text, and they’re also summarized at the end.
Although many of these mistakes apply to all types of software projects, my specific focus
is the testing of commercial software products, not custom software or software that is
safety critical or mission critical.
This paper is essentially a series of bug reports for the testing process. You may think
some of them are features, not bugs. You may disagree with the severities I assign. You
may want more information to help in debugging, or want to volunteer information of
your own. Any decent bug reporting system will treat the original bug report as the first
part of a conversation. So should it be with this paper. Therefore, see
http://www.stlabs.com/marick/classic.htm for an ongoing discussion of this topic.
Theme One: The Role of Testing
A first major mistake people make is thinking that the testing team is responsible
for assuring quality. This role, often assigned to the first testing team in an
organization, makes it the last defense, the barrier between the development team
(accused of producing bad quality) and the customer (who must be protected from them).
It’s characterized by a testing team (often called the “Quality Assurance Group”) that has
formal authority to prevent shipment of the product. That in itself is a disheartening task:
the testing team can’t improve quality, only enforce a minimal level. Worse, that authority
is usually more apparent than real. Discovering that, together with the perverse incentives
of telling developers that quality is someone else’s job, leads to testing teams and testers
who are disillusioned, cynical, and view themselves as victims. We’ve learned from
Deming and others that products are better and cheaper to produce when everyone, at
every stage in development, is responsible for the quality of their work ([Deming86],
[Ishikawa85]).
In practice, whatever the formal role, most organizations believe that the purpose of
testing is to find bugs. This is a less pernicious definition than the previous one, but
it’s missing a key word. When I talk to programmers and development managers about
testers, one key sentence keeps coming up: “Testers aren’t finding the important
bugs.” Sometimes that’s just griping, sometimes it’s because the programmers have a
skewed sense of what’s important, but I regret to say that all too often it’s valid criticism.
Too many bug reports from testers are minor or irrelevant, and too many important bugs
are missed.
What’s an important bug? Important to whom? To a first approximation, the answer must
be “to customers”. Almost everyone will nod their head upon hearing this definition, but
do they mean it? Here’s a test of your organization’s maturity. Suppose your product is a
system that accepts email requests for service. As soon as a request is received, it sends a
reply that says “your request of 5/12/97 was accepted and its reference ID is NIC-051297-3”. A tester who sends in many requests per day finds she has difficulty keeping track of
which request goes with which ID. She wishes that the original request were appended to
the acknowledgement. Furthermore, she realizes that some customers will also generate
many requests per day, so would also appreciate this feature. Would she:
1. file a bug report documenting a usability problem, with the expectation that it will be
assigned a reasonably high priority (because the fix is clearly useful to everyone,
important to some users, and easy to do)?
2. file a bug report with the expectation that it will be assigned “enhancement request”
priority and disappear forever into the bug database?
3. file a bug report that yields a “works as designed” resolution code, perhaps with an
email “nastygram” from a programmer or the development manager?
4. not bother with a bug report because it would end up in cases (2) or (3)?
If usability problems are not considered valid bugs, your project defines the
testing task too narrowly. Testers are restricted to checking whether the product does
what was intended, not whether what was intended is useful. Customers do not care
about the distinction, and testers shouldn’t either.
Testers are often the only people in the organization who use the system as heavily as an
expert. They notice usability problems that experts will see. (Formal usability testing
almost invariably concentrates on novice users.) Expert customers often don’t report
usability problems, because they’ve been trained to know it’s not worth their time.
Instead, they wait (in vain, perhaps) for a more usable product and switch to it. Testers
can prevent that lost revenue.
While defining the purpose of testing as “finding bugs important to customers” is a step
forward, it’s more restrictive than I like. It means that there is no focus on an
estimate of quality (and on the quality of that estimate). Consider these two
situations for a product with five subsystems.
1. 100 bugs are found in subsystem 1 before release. (For simplicity, assume that all bugs
are of the highest priority.) No bugs are found in the other subsystems. After release,
no bugs are reported in subsystem 1, but 12 bugs are found in each of the other
subsystems.
2. Before release, 50 bugs are found in subsystem 1. 6 bugs are found in each of the
other subsystems. After release, 50 bugs are found in subsystem 1 and 6 bugs in each
of the other subsystems.
From the “find important bugs” standpoint, the first testing effort was superior. It found
100 bugs before release, whereas the second found only 74. But I think you can make a
strong case that the second effort is more useful in practical terms. Let me restate the two
situations in terms of what a test manager might say before release:
1. “We have tested subsystem 1 very thoroughly, and we believe we’ve found almost all
of the priority 1 bugs. Unfortunately, we don’t know anything about the bugginess of
the remaining four subsystems.”
2. “We’ve tested all subsystems moderately thoroughly. Subsystem 1 is still very buggy.
The other subsystems are about 1/10th as buggy, though we’re sure bugs remain.”
This is, admittedly, an extreme example, but it demonstrates an important point. The
project manager has a tough decision: would it be better to hold on to the product for
more work, or should it be shipped now? Many factors - all rough estimates of possible
futures - have to be weighed: Will a competitor beat us to release and tie up the market?
Will dropping an unfinished feature to make it into a particular magazine’s special “Java
Development Environments” issue cause us to suffer in the review? Will critical customer
X be more annoyed by a schedule slip or by a shaky product? Will the product be buggy
enough that profits will be eaten up by support costs or, worse, a recall? 1
The testing team will serve the project manager better if it concentrates first on providing
estimates of product bugginess (reducing uncertainty), then on finding more of the bugs
that are estimated to be there. That affects test planning, the topic of the next theme.
It also affects status reporting. Test managers often err by reporting bug data
without putting it into context. Without context, project management tends to
focus on one graph:
[Figure: “Bug Status” chart plotting the count of bugs found and bugs fixed (0 to 120) against build number (builds 1 to 9).]
1 Notice how none of the decisions depend solely on the product’s bugginess. That’s another reason why giving the testing manager “stop ship” authority is a bad idea. He or she simply doesn’t have enough information to use that authority wisely. The project manager might not have enough either, but won’t have less.
The flattening in the curve of bugs found will be interpreted in the most optimistic possible
way unless you as test manager explain the limitations of the data:
· “Only half the planned testing tasks have been finished, so little is known about half
the areas in the project. There could soon be a big spike in the number of bugs
found.”
· “That’s especially likely because the last two weekly builds have been lightly tested.
I told the testers to take their vacations now, before the project hits crunch mode.”
· “Furthermore, based on previous projects with similar amounts and kinds of testing
effort, it’s reasonable to expect at least 45 priority-1 bugs remain undiscovered.
Historically, that’s pretty high for a successful product.”
For discussions of using bug data, see [Cusumano95], [Rothman96], and [Marick97].
Earlier I asserted that testers can’t directly improve quality; they can only measure it.
That’s true only if you find yourself starting testing too late. Tests designed before
coding begins can improve quality. They inform the developer of the kinds of tests that
will be run, including the special cases that will be checked. The developer can use that
information while thinking about the design, during design inspections, and in his own
developer testing.2
Early test design can do more than prevent coding bugs. As will be discussed in the next
theme, many tests will represent user tasks. The process of designing them can find user
interface and usability problems before expensive rework is required. I’ve found problems
like no user-visible place for error messages to go, pluggable modules that didn’t fit together, two screens that had to be used together but could not be displayed simultaneously, and “obvious” functions that couldn’t be performed. Test design fits nicely into any usability engineering effort ([Nielsen93]) as a way of finding specification bugs.
2 One person who worked in a pathologically broken organization told me that they were given the acceptance test in advance. They coded the program to recognize the test cases and return the correct answer, bypassing completely the logic that was supposed to calculate the answer. Few companies are that bad, but you could argue that programmers will tend to produce code “trained” for the tests. If the tests are good, that’s not a problem - the code is also trained for the real customers. The biggest danger is that the programmers will interpret the tests as narrow special cases, rather than handling the more general situation. That can be forestalled by writing the early test designs in terms of general situations rather than specific inputs: “more than two columns per page” rather than “three two-inch columns on an A4 page”. Also, the tests given to the programmers will likely be supplemented by others designed later.
I should note that involving testing early feels unnatural to many programmers and
development managers. There may be feelings that you are intruding on their turf or not
giving them the chance to make the mistakes that are an essential part of design. Take
care, especially at first, not to increase their workload or slow them down. It may take
one or two entire projects to establish your credibility and usefulness.
Theme Two: Planning the Testing Effort
I’ll first discuss specific planning mistakes, then relate test planning to the role of testing.
It’s not unusual to see test plans biased toward functional testing. In functional
testing, particular features are tested in isolation. In a word processor, all the options for
printing would be applied, one after the other. Editing options would later get their own
set of tests.
But there are often interactions between features, and functional testing tends to miss
them. For example, you might never notice that the sequence of operations “open a document, edit the document, print the whole document, edit one page, print that page” doesn’t work. But customers surely will, because
they don’t use products functionally. They have a task orientation. To find the bugs that
customers see - that are important to customers - you need to write tests that cross
functional areas by mimicking typical user tasks. This type of testing is called scenario
testing, task-based testing, or use-case testing.
A bias toward functional testing also underemphasizes configuration testing.
Configuration testing checks how the product works on different hardware and when
combined with different third party software. There are typically many combinations that
need to be tried, requiring expensive labs stocked with hardware and much time spent
setting up tests, so configuration testing isn’t cheap. But, it’s worth it when you discover
that your standard in-house platform which “entirely conforms to industry standards”
actually behaves differently from most of the machines on the market.
Both configuration testing and scenario testing test global, cross-functional aspects of the
product. Another type of testing that spans the product checks how it behaves under
stress (a large number of transactions, very large transactions, a large number of
simultaneous transactions). Putting stress and load testing off to the last
minute is common, but it leaves you little time to do anything substantive when you
discover your product doesn’t scale up to more than 12 users.3
3 Failure to apply particular types of testing is another reason why developers complain that testers aren’t finding the important bugs. Developers of an operating system could be spending all their time debugging crashes of their private machines, crashes due to networking bugs under normal load. The testers are doing straight “functional tests” on isolated machines, so they don’t find bugs. The bugs they do find are not more serious than crashes (usually defined as highest severity for operating systems), and they’re probably less.
Two related mistakes are not testing the documentation and not testing
installation procedures. Testing the documentation means checking that all the
procedures and examples in the documentation work. Testing installation procedures is a
good way to avoid making a bad first impression.
How about avoiding testing altogether?
At a conference last year, I met (separately) two depressed testers who told me their
management was of the opinion that the World Wide Web could reduce testing costs.
“Look at [wildly successful internet company]. They distribute betas over the network
and get their customers to do the testing for free!” The Windows 95 beta program is also
cited in similar ways.
Beware of an overreliance on beta testing. Beta testing seems to give you test
cases representative of customer use - because the test cases are customer use. Also, bugs
reported by customers are by definition those important to customers. However, there are
several problems:
1. The customers probably aren’t that representative. In the common high-tech
marketing model4, beta users, especially those of the “put it on your web site and they
will download” sort, are the early adopters, those who like to tinker with new
technologies. They are not the pragmatists, those who want to wait until the
technology is proven and safe to adopt. The usage patterns of these two groups are
different, as are the kinds of bugs they consider important. In particular, early
adopters have a high tolerance for bugs with workarounds and for bugs that “just go
away” when they reload the program. Pragmatists, who are much less tolerant, make
up the large majority of the market.
2. Even of those beta users who actually use the product, most will not use it seriously.
They will give it the equivalent of a quick test drive, rather than taking the whole
family for a two week vacation. As any car buyer knows, the test drive often leaves
unpleasant features undiscovered.
3. Beta users - just like customers in general - don’t report usability problems unless
prompted. They simply silently decide they won’t buy the final version.
4. Beta users - just like customers in general - often won’t report a bug, especially if
they’re not sure what they did to cause it, or if they think it is obvious enough that
someone else must have already reported it.
5. When beta users report a bug, the bug report is often unusable. It costs much more
time and effort to handle a user bug report than one generated internally.
4 See [Moore91] or [Moore95]. I briefly describe this model in a review of Moore’s books, available through Pure
Atria’s book review pages (http://www.pureatria.com).
Beta programs can be useful, but they require careful planning and monitoring if they are
to do more than give a warm fuzzy feeling that at least some customers have used the
product before it’s inflicted on all of them. See [Kaner93] for a brief description.
The one situation in which beta programs are unequivocally useful is in configuration
testing. For any possible screwy configuration, you can find a beta user who has it. You
can do much more configuration testing than would be possible in an in-house lab (or even
perhaps an outsourced testing agency). Beta users won’t do as thorough a job as a trained
tester, but they’ll catch gross errors of the “BackupBuster doesn’t work on this brand of
‘compatible’ floppy tape drive” sort.
Beta programs are also useful for building word of mouth advertising, getting “first
glance” reviews in magazines, supporting third-party vendors who will build their product
on top of yours, and so on. Those are properly marketing activities, not testing.
Planning and replanning in support of the role of testing
Each of the types of testing described above, including functional testing, reduces
uncertainty about a particular aspect of the product. When done, you have confidence
that some functional areas are less buggy, others more. The product either usually works
on new configurations, or it doesn’t.5
There’s a natural tendency toward finishing one testing task before moving on
to the next, but that may lead you to discover bad news too late. It’s better to know
something about all areas than everything about a few. When you’ve discovered where the
problem areas lie, you can test them to greater depth as a way of helping the developers
raise the quality by finding the important bugs.6
Strictly, I’ve been over-simplistic in describing testing’s role as reducing uncertainty. It
would be better to say “risk-weighted uncertainty”. Some areas in the product are riskier
than others, perhaps because they’re used by more customers or because failures in that
area would be particularly severe. Riskier areas require more certainty. Failing to
correctly identify risky areas is a common mistake, and it leads to misallocated
testing effort. There are two sound approaches for identifying risky areas:
1. Ask everyone you can for their opinion. Gather data from developers, marketers,
technical writers, customer support people, and whatever customer representatives you can find. See [Kaner96a] for a good description of this kind of collaborative test planning.
2. Use historical data. Analyzing bug reports from past products (especially those from customers, but also internal bug reports) helps tell you what areas to explore in this project.
5 I use “confidence” in its colloquial rather than its statistical sense. Conventional testing that searches specifically for bugs does not allow you to make statements like “this product will run on 95±5% of Wintel machines”. In that sense, it’s weaker than statistical or reliability testing, which uses statistical profiles of the customer environment to both find bugs and make failure estimates. (See [Dyer92], [Lyu96], and [Musa87].) Statistical testing can be difficult to apply, so I concentrate on a search for bugs as the way to get a usable estimate. A lack of statistical validity doesn’t mean that bug numbers give you nothing but “warm and fuzzy (or cold and clammy) feelings”. Given a modestly stable testing process, development process, and product line, bug numbers lead to distinctly better decisions, even if they don’t come with p-values or statistical confidence intervals.
6 It’s expensive to test quality into the product, but it may be the only alternative. Code redesigns and rewrites may not be an option.
“So, winter’s early this year. We’re still going to invade Russia.”
Good testers are systematic and organized, yet they are exposed to all the chaos and twists
and turns and changes of plan typical of a software development project. In fact, the
chaos is magnified by the time it gets to testers, because of their position at the end of the
food chain and typically low status.7 One unfortunate reaction is sticking stubbornly
to the test plan. Emotionally, this can be very satisfying: “They can flail around
however they like, but I’m going to hunker down and do my job.” The problem is that
your job is not to write tests. It’s to find the bugs that matter in the areas of greatest
uncertainty and risk, and ignoring changes in the reality of the product and project can
mean that your testing becomes irrelevant.8
That’s not to say that testers should jump to readjust all their plans whenever there’s a
shift in the wind, but my experience is that more testers let their plans fossilize than
overreact to project change.
Theme Three: Personnel Issues
Fresh out of college, I got my first job as a tester. I had been hired as a developer, and
knew nothing about testing, but, as they said, “we don’t know enough about you yet, so
we’ll put you somewhere where you can’t do too much damage”. In due course, I
“graduated” to development.
Using testing as a transitional job for new programmers is one of the two
classic mistaken ways to staff a testing organization. It has some virtues. One is that you
really can keep bad hires away from the code. A bozo in testing is often less dangerous
than a bozo in development. Another is that the developer may learn something about
testing that will be useful later. (In my case, it founded a career.) And it’s a way for the
new hire to learn the product while still doing some useful work.
The advantages are outweighed by the disadvantage: the new hire can’t wait to get out of
testing. That’s hardly conducive to good work. You could argue that the testers have to
do good work to get “paroled”. Unfortunately, because people tend to be as impressed by
effort as by results, vigorous activity - especially activity that establishes credentials as a programmer - becomes the way out. As a result, the fledgling tester does things like become the expert in the local programmable editor or complicated freeware tool. That, at least, is a potentially useful role, though it has nothing to do with testing. More dangerous is vigorous but misdirected testing activity; namely, test automation. (See the last theme.)
7 How many proposed changes to a product are rejected because of their effect on the testing schedule? How often does the effect on the testing team even cross a developer’s or marketer’s mind?
8 This is yet another reason why developers complain that testers aren’t finding the important bugs. Because of market pressure, the project has shifted to an Internet focus, but the testers are still using and testing the old “legacy” interface instead of the now critically important web browser interface.
Even if novice testers were well guided, having so much of the testing staff be transients
could only work if testing is a shallow algorithmic discipline. In fact, good testers require
deep knowledge and experience.
The second classic mistake is recruiting testers from the ranks of failed
programmers. There are plenty of good testers who are not good programmers, but a
bad programmer likely has some work habits that will make him a bad tester, too. For
example, someone who makes lots of bugs because he’s inattentive to detail will miss lots
of bugs for the same reason.
So how should the testing team be staffed? If you’re willing to be part of the training
department, go ahead and accept new programmer hires.9 Accept as applicants
programmers who you suspect are rejects (some fraction of them really have gotten tired
of programming and want a change) but interview them as you would an outside hire.
When interviewing, concentrate less on formal qualifications than on intelligence and the
character of the candidate’s thought. A good tester has these qualities:10
· methodical and systematic.
· tactful and diplomatic (but firm when necessary).
· skeptical, especially about assumptions, and wants to see concrete evidence.
· able to notice and pursue odd details.
· good written and verbal skills (for explaining bugs clearly and concisely).
· a knack for anticipating what others are likely to misunderstand. (This is useful both in
finding bugs and writing bug reports.)
· a willingness to get one’s hands dirty, to experiment, to try something to see what
happens.
Be especially careful to avoid the trap of testers who are not domain experts.
Too often, the tester of an accounting package knows little about accounting.
Consequently, she finds bugs that are unimportant to accountants and misses ones that
are. Further, she writes bug reports that make serious bugs seem irrelevant. A
programmer may not see past the unrepresentative test to the underlying important
problem. (See the discussion of reporting bugs in the next theme.)
Domain experts may be hard to find. Try to find a few. And hire testers who are quick
studies and are good at understanding other people’s work patterns.
9 Some organizations rotate all developers through testing. Well, all developers except those with enough clout to
refuse. And sometimes people not in great demand don’t seem ever to rotate out. I’ve seen this approach work,
but it’s fragile.
10 See also the list in [Kaner93], chapter 15.
Two groups of people are readily at hand and often have those skills. But testing teams
often do not seek out applicants from the customer service staff or the
technical writing staff. The people who field email or phone problem reports
develop, if they’re good, a sense of what matters to the customer (at least to the vocal
customer) and the best are very quick on their mental feet.
Like testers, technical writers often also lack detailed domain knowledge. However,
they’re in the business of translating a product’s behavior into terms that make sense to a
user. Good technical writers develop a sense of what’s important, what’s confusing, and
so on. Those areas that are hard to explain are often fruitful sources of bugs. (What
confuses the user often also confuses the programmer.)
One reason these two groups are not tapped is an insistence that testers be able to
program. Programming skill brings with it certain advantages in bug hunting. A
programmer is more likely to find the number 2,147,483,648 interesting than an
accountant will. (It overflows a signed integer on most machines.) But such tricks of the
trade are easily learned by competent non-programmers, so not having them is a weak
reason for turning someone down.
If you hire according to these guidelines, you will avoid a testing team that lacks
diversity. All of the members will lack some skills, but the team as a whole will have
them all. Over time, in a team with mutual respect, the non-programmers will pick up
essential tidbits of programming knowledge, the programmers will pick up domain
knowledge, and the people with a writing background will teach the others how to
deconstruct documents.
All testers - but non-programmers especially - will be hampered by a physical
separation between developers and testers. A smooth working relationship
between developers and testers is essential to efficient testing. Too much valuable
information is unwritten; the tester finds it by talking to developers. Developers and
testers must often work together in debugging; that’s much harder to do remotely.
Developers often dismiss bug reports too readily, but it’s harder to do that to a tester you
eat lunch with.
Remote testing can be made to work - I’ve done it - but you have to be careful. Budget
money for frequent working visits, and pay attention to interpersonal issues.
Some believe that programmers can’t test their own code. On the face of it, this
is false: programmers test their code all the time, and they do find bugs. Just not enough
of them, which is why we need independent testers.
But if independent testers are testing, and programmers are testing (and inspecting), isn’t
there a potential duplication of effort? And isn’t that wasteful? I think the answer is yes.
Ideally, programmers would concentrate on the types of bugs they can find adequately
well, and independent testers would concentrate on the rest.
The bugs programmers can find well are those where their code does not do what they
intended. For example, a reasonably trained, reasonably motivated programmer can do a
perfectly fine job finding boundary conditions and checking whether each known
equivalence class is handled. What programmers do poorly is discovering overlooked
special cases (especially error cases), bugs due to the interaction of their code with other
people’s code (including system-wide properties like deadlocks and performance
problems), and usability problems.
Crudely put, good programmers do functional testing, and testers should do everything
else.11 Recall that I earlier claimed an over-concentration on functional testing is a classic
mistake. Decent programmer testing magnifies the damage it does.
Of course, decent programmer testing is relatively rare, because programmers are
neither trained nor motivated to test. This is changing, gradually, as companies
realize it’s cheaper to have bugs found and fixed quickly by one person, instead of more
slowly by two. Until then, testers must do both the testing that programmers can do and
the testing only testers can do, but must take care not to let functional testing squeeze out
the rest.
Theme Four: The Tester At Work
When testing, you must decide how to exercise the program, then do it. The doing is ever
so much more interesting than the deciding. A tester’s itch to start breaking the program is
as strong as a programmer’s itch to start writing code - and it has the same effect: design
work is skimped, and quality suffers. Paying more attention to running tests
than to designing them is a classic mistake. A tester who is not systematic, who does
not spend time laying out the possibilities in advance, will overlook special cases. They
may be the same subtle ones that the programmers overlooked.
Concentration on execution also results in unreviewed test designs. Just like
programmers, testers can benefit from a second pair of eyes. Reviews of test designs
needn’t be as elaborate as product design reviews, but a short check of the testing
approach and the resulting tests can find significant omissions at low cost.
What is a test design?
A test design should contain a description of the setup (including machine configuration
for a configuration test), inputs given to the product, and a description of expected results.
One common mistake is being too specific about test inputs and procedures.
Let’s assume manual test implementation for the moment. A related argument for
automated tests will be discussed in the next section. Suppose you’re testing a banking
application. Here are two possible test designs:
11 Independent testers will also provide a “safety net” for programmer testing. A certain amount of functional testing
might be planned, or it might be a side effect of the other types of testing being done.
Design 1
Setup: initialize the balance in account 12 with $100.
Procedure:
Start the program.
Type 12 in the Account window.
Press OK.
Click on the ‘Withdraw’ toolbar button.
In the withdraw popup dialog, click on the ‘all’ button.
Press OK.
Expect to see a confirmation popup that says “You are about to withdraw all the
money from this account. Continue?”
Press OK.
Expect to see a 0 balance in the account window.
Separately query the database to check that the zero balance has been posted.
Exit the program with File->Exit.
Design 2
Setup: initialize the balance with a positive value.
Procedure:
Start the program on that account.
Withdraw all the money from the account using the ‘all’ button.
It’s an error if the transaction happens without a confirmation popup.
Immediately thereafter:
- Expect a $0 balance to be displayed.
- Independently query the database to check that the zero balance has been posted.
The first design style has these advantages:
· The test will always be run the same way. You are more likely to be able to reproduce
the bug. So will the programmer.
· It details all the important expected results to check. Imprecise expected results make
failures harder to notice. For example, a tester using the second style would find it
easier to overlook a spelling error in the confirmation popup, or even that it was the
wrong popup.
· Unlike the second style, you always know exactly what you’ve tested. In the second
style, you couldn’t be sure that you’d ever gotten to the Withdraw dialog via the
toolbar. Maybe the menu was always used. Maybe the toolbar button doesn’t work at
all!
· By spelling out all inputs, the first style prevents testers from carelessly overusing
simple values. For example, a tester might always test accounts with $100, rather than
using a variety of small and large balances. (Either style should include explicit tests
for boundary and special values.)
However, there are also some disadvantages:
· The first style is more expensive to create.
· The inevitable minor changes to the user interface will break it, so it’s more expensive
to maintain.
· Because each run of the test is exactly the same, there’s no chance that a variation in
procedure will stumble across a bug.
· It’s hard for testers to follow a procedure exactly. When one makes a mistake -
pushes the wrong button, for example - will she really start over?
On balance, I believe the negatives often outweigh the positives, provided there is a
separate testing task to check that all the menu items and toolbar buttons are hooked up.
(Not only is a separate task more efficient, it’s less error-prone. You’re less likely to
accidentally omit some buttons.)
I do not mean to suggest that test cases should not be rigorous, only that they should be
no more rigorous than is justified, and that we testers sometimes err on the side of
uneconomical detail.
Detail in the expected results is less problematic than in the test procedure, but too much
detail can focus the tester’s attention too much on checking against the script he’s
following. That might encourage another classic mistake: not noticing and
exploring “irrelevant” oddities. Good testers are masters at noticing “something
funny” and acting on it. Perhaps there’s a brief flicker in some toolbar button which, when
investigated, reveals a crash. Perhaps an operation takes an oddly long time, which
suggests to the attentive tester that increasing the size of an “irrelevant” dataset might
cause the program to slow to a crawl. Good testing is a combination of following a script
and using it as a jumping-off point for an exploration of the product.
An important special case of overlooking bugs is checking that the product does
what it’s supposed to do, but not that it doesn’t do what it isn’t supposed
to do. As an example, suppose you have a program that updates a health care service’s
database of family records. A test adds a second child to Dawn Marick’s record. Almost
all testers would check that, after the update, Dawn now has two children. Some testers -
those who are clever, experienced, or subject matter experts - would check that Dawn
Marick’s spouse, Brian Marick, also now has two children. Relatively few testers would
check that no one else in the database has had a child added. They would miss a bug
where the programmer over-generalized and assumed that all “family information” updates
should be applied both to a patient and to all members of her family, giving Paul Marick
(aged 2) a child.
Ideally, every test should check that all data that should be modified has been modified
and that all other data has been unchanged. With forethought, that can be built into
automated tests. Complete checking may be impractical for manual tests, but occasional
quick scans for data that might be corrupted can be valuable.
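As a sketch of what “checking that everything else is unchanged” might look like in an automated test, the fragment below snapshots per-patient child counts before and after the update and flags any unexpected change. The DSN, table, column names, and patient ID are invented for illustration; ADODB and Scripting.Dictionary are standard Windows scripting objects, and Reporter is QuickTest’s result-logging object.

' Hypothetical database check: only Dawn Marick's record should gain a child.
Dim conn, dawnId
dawnId = 101   ' made-up patient id for Dawn Marick
Set conn = CreateObject("ADODB.Connection")
conn.Open "DSN=HealthCareDB"   ' hypothetical data source

Function ChildCounts(conn)
    Dim dict, rs
    Set dict = CreateObject("Scripting.Dictionary")
    Set rs = conn.Execute("SELECT patient_id, COUNT(*) AS kids FROM children GROUP BY patient_id")
    Do Until rs.EOF
        dict(rs("patient_id").Value) = rs("kids").Value
        rs.MoveNext
    Loop
    rs.Close
    Set ChildCounts = dict
End Function

Dim before, after, id
Set before = ChildCounts(conn)
' ... run the test step that adds a second child to Dawn Marick's record here ...
Set after = ChildCounts(conn)

For Each id In after.Keys
    If id <> dawnId And before.Exists(id) And after(id) <> before(id) Then
        Reporter.ReportEvent micFail, "Side-effect check", _
            "Child count changed unexpectedly for patient " & id
    End If
Next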
Testing should not be isolated work
Here’s another version of the test we’ve been discussing:
Design 3
Withdraw all with confirmation and normal check for 0.
That means the same thing as Design 2 - but only to the original author. Test suites
that are understandable only by their owners are ubiquitous. They cause many
problems when their owners leave the company; sometimes many months’ worth of work
has to be thrown out.
I should note that designs as detailed as Designs 1 or 2 often suffer a similar problem.
Although they can be run by anyone, not everyone can update them when the product’s
interface changes. Because the tests do not list their purposes explicitly, updates can
easily make them test a little less than they used to. (Consider, for example, a suite of
tests in the Design 1 style: how hard will it be to make sure that all the user interface
controls are touched in the revised tests? Will the tester even know that’s a goal of the
suite?) Over time, this leads to what I call “test suite decay,” in which a suite full of tests
runs but no longer tests much of anything at all.12
Another classic mistake involves the boundary between the tester and programmer. Some
products are mostly user interface; everything they do is visible on the screen. Other
products are mostly internals; the user interface is a “thin pipe” that shows little of what
happens inside. The problem is that testing has to use that thin pipe to discover failures.
What if complicated internal processing produces only a “yes or no” answer? Any given
test case could trigger many internal faults that, through sheer bad luck, don’t produce the
wrong answer.13
In such situations, testers sometimes rely solely on programmer (“unit”) testing. In cases
where that’s not enough, testing only through the user-visible interface is a
mistake. It is far better to get the programmers to add “testability hooks” or “testpoints”
that reveal selected internal state. In essence, they convert a product like this:
[Diagram: the guts of the product, reachable only through the user interface.]
to one like this:
[Diagram: the guts of the product, reachable through both the user interface and a separate testing interface.]
12 The purpose doesn’t need to be listed with the test. It may be better to have a central document describing the purposes of a group of tests, perhaps in tabular form. Of course, then you have to keep that document up to date.
13 This is an example of the formal notion of “testability”. See [Friedman95] or [Voas91] for an academic treatment.
It is often difficult to convince programmers to add test support code to the product.
(Actual quote: “I don’t want to clutter up my code with testing crud.”) Persevere, start
modestly, and take advantage of these facts:
1. The test support code is often a simple extension of the debugging support code
programmers write anyway.14
2. A small amount of test support code often goes a long way.
A common objection to this approach is that the test support code must be compiled out
of the final product (to avoid slowing it down). If so, tests that use the testing interface
“aren’t testing what we ship”. It is true that some of the tests won’t run on the final
version, so you may miss bugs. But, without testability code, you’ll miss bugs that don’t
reveal themselves through the user interface. It’s a risk tradeoff, and I believe that adding
test support code usually wins. See [Marick95], chapter 13, for more details.
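What a “testpoint” buys you can be shown with a small sketch. Suppose the developers expose a tiny test-only COM object (the ProgID and property below are entirely hypothetical) that reports selected internal state; a test can then confirm that what the UI claims matches what the internals actually did. Reporter is QuickTest’s logging object.

' Hypothetical test hook exposed by the application under test.
Dim hook
Set hook = CreateObject("BankServer.TestHook")   ' test-only object, compiled out of the release build

' Drive the withdrawal through the normal user interface first, then ask the
' hook whether the internal transaction queue agrees with what the UI reported.
If hook.PendingTransactionCount = 0 Then
    Reporter.ReportEvent micPass, "Internal state", "Transaction queue drained as the UI reported."
Else
    Reporter.ReportEvent micFail, "Internal state", _
        hook.PendingTransactionCount & " transaction(s) still pending despite a 'complete' message in the UI."
End If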
In one case, there’s an alternative to having the programmer add code to the product:
have a tool do it. Commercial tools like Purify, Boundschecker, and Sentinel
automatically add code that checks for certain classes of failures (such as memory leaks).15
They provide a narrow, specialized testing interface. For marketing reasons, these tools
are sold as programmer debugging tools, but they’re equally test support tools, and I’m
amazed that testing groups don’t use them as a matter of course.
Testability problems are exacerbated in distributed systems like conventional client/server
systems, multi-tiered client/server systems, Java applets that provide smart front-ends to
web sites, and so forth. Too often, tests of such systems amount to shallow tests of the
user interface component because that’s the only component that the tester can easily
control.
14 For example, the Java language encourages programmers to use the toString method to make internal objects
printable. A programmer doesn’t have to use it, since the debugger lets her see all the values in any object, but it
simplifies debugging for objects she’ll look at often. All testers need (roughly) is a way to call toString from
some external interface.
15 For a list of such commercial tools, see http://www.stlabs.com/marick/faqs/tools.htm. Follow the link to “Other
Test Implementation Tools”.
Finding failures is only the start
It’s not enough to find a failure; you must also report it. Unfortunately, poor bug
reporting is a classic mistake. Tester bug reports suffer from five major problems:
1. They do not describe how to reproduce the bug. Either no procedure is given, or the
given procedure doesn’t work. Either case will likely get the bug report shelved.
2. They don’t explain what went wrong. At what point in the procedure does the bug
occur? What should happen there? What actually happened?
3. They are not persuasive about the priority of the bug. Your job is to have the
seriousness of the bug accurately assessed. There’s a natural tendency for
programmers and managers to rate bugs as less serious than they are. If you believe a
bug is serious, explain why a customer would view it the way you do.16 If you found
the bug with an odd case, take the time to reproduce it with a more obviously common
or compelling case.
4. They do not help the programmer in debugging. This is a simple cost/benefit tradeoff.
A small amount of time spent simplifying the procedure for reproducing the bug or
exploring the various ways it could occur may save a great deal of programmer time.
5. They are insulting, so they poison the relationship between developers and testers.
[Kaner93] has an excellent chapter (5) on how to write bug reports. Read it.
Not all bug reports come from testers. Some come from customers. When that happens,
it’s common for a tester to write a regression test that reproduces the bug in the broken
version of the product. When the bug is fixed, that test is used to check that it was fixed
correctly.
However, adding only regression tests is not enough. A customer bug report
suggests two things:
1. That area of the product is buggy. It’s well known that bugs tend to cluster.17
2. That area of the product was inadequately tested. Otherwise, why did the bug
originally escape testing?
An appropriate response to several customer bug reports in an area is to schedule more
thorough testing for that area. Begin by examining the current tests (if they’re
understandable) to determine their systematic weaknesses.
Finally, every bug report is a gift from a customer that tells you how to test better in the
future. A common mistake is failing to take notes for the next testing effort.
The next product will be somewhat like this one, the bugs will be somewhat like these, and the tests useful in finding those bugs will also be somewhat like the ones you just ran. Mental notes are easy to forget, and they’re hard to hand to a new tester. Writing is a wonderful human invention: use it. Both [Kaner93] and [Marick95] describe formats for archiving test information, and both contain general-purpose examples.
16 Cem Kaner suggests something even better: have the person whose budget will be directly affected explain why the bug is important. The customer service manager will speak more authoritatively about those installation bugs than you could.
17 That’s true even if the bug report is due to a customer misunderstanding. Perhaps this area of the product is just too hard to understand.
Theme Five: Technology Run Rampant
Test automation is based on a simple economic proposition:
· If a manual test costs $X to run the first time, it will cost just about $X to run each
time thereafter, whereas:
· If an automated test costs $Y to create, it will cost almost nothing to run from then
on.
$Y is bigger than $X. I’ve heard estimates ranging from 3 to 30 times as big, with the
most commonly cited number seeming to be 10. Suppose 10 is correct for your application
and your automation tools. Then you should automate any test that will be run more than
10 times.
A classic mistake is to ignore these economics, attempting to automate all tests,
even those that won’t be run often enough to justify it. What tests clearly justify
automation?
· Stress or load tests may be impossible to implement manually. Would you have a
tester execute and check a function 1000 times? Are you going to sit 100 people down
at 100 terminals?
· Nightly builds are becoming increasingly common. (See [McConnell96] or
[Cusumano95] for descriptions of the procedure.) If you build the product nightly,
you must have an automated “smoke test suite”. Smoke tests are those that are run
after every build to check for grievous errors.
· Configuration tests may be run on dozens of configurations.
The other kinds of tests are less clear-cut. Think hard about whether you’d rather have
automated tests that are run often or ten times as many manual tests, each run once.
Beware of irrational, emotional reasons for automating, such as testers who find
programming automated tests more fun, a perception that automated tests will lead to
higher status (everything else is “monkey testing”), or a fear of not rerunning a test that
would have found a bug (thus leading you to automate it, leaving you without enough
time to write a test that would have found a different bug).
You will likely end up in a compromise position, where you have:
1. a set of automated tests that are run often.
2. a well-documented set of manual tests. Subsets of these can be rerun as necessary.
For example, when a critical area of the system has been extensively changed, you
might rerun its manual tests. You might run different samples of this suite after each
major build. 18
3. a set of undocumented tests that were run once (including exploratory “bug bash”
tests).
Beware of expecting to rerun all manual tests. You will become bogged down
rerunning tests with low bug-finding value, leaving yourself no time to create new tests.
You will waste time documenting tests that don’t need to be documented.
You could automate more tests if you could lower the cost of creating them. That’s the
promise of using GUI capture/replay tools to reduce test creation cost. The
notion is that you simply execute a manual test, and the tool records what you do. When
you manually check the correctness of a value, the tool remembers that correct value.
You can then later play back the recording, and the tool will check whether all checked
values are the same as the remembered values.
There are two variants of such tools. What I call the first generation tools capture raw
mouse movements or keystrokes and take snapshots of the pixels on the screen. The
second generation tools (often called “object oriented”) reach into the program and
manipulate underlying data structures (widgets or controls).19
First generation tools produce unmaintainable tests. Whenever the screen layout changes
in the slightest way, the tests break. Mouse clicks are delivered to the wrong place, and
snapshots fail in irrelevant ways that nevertheless have to be checked. Because screen
layout changes are common, the constant manual updating of tests becomes insupportable.
Second generation tools are applicable only to tests where the underlying data structures
are useful. For example, they rarely apply to a photograph editing tool, where you need to
look at an actual image - at the actual bitmap. They also tend not to work with custom
controls. Heavy users of capture/replay tools seem to spend an inordinate amount of time
trying to get the tool to deal with the special features of their program - which raises the
cost of test automation.
Second generation tools do not guarantee maintainability either. Suppose a radio button is
changed to a pulldown list. All of the tests that use the old controls will now be broken.
GUI interface changes are of course common, especially between releases. Consider
carefully whether an automated test that must be recaptured after GUI changes is worth
having. Keep in mind that it can be hard to figure out what a captured test is attempting
to accomplish unless it is separately documented.
18 An additional benefit of automated tests is that they can be run faster than manual tests. That allows you to reduce
the time between completion of a build and completion of its testing. That can be especially important in the final
builds, if only to avoid pressure from executives itching to ship the product. You’re trading fewer tests for faster
time to market. That can be a reasonable tradeoff, but it doesn’t affect the core of my argument, which is that not
all tests should be automated.
19 These are, in effect, another example of tools that add test support code to the program.
As a rule of thumb, it’s dangerous to assume that an automated test will pay for itself this
release, so your test must be able to survive a reasonable level of GUI change. I believe
that capture/replay tests, of either generation, are rarely robust enough.
An alternative approach to capture/replay is scripting tests. (Most GUI capture/replay
tools also allow scripting.) Some member of the testing team writes a “test API”
(application programmer interface) that lets other members of the team express their tests
in less GUI-dependent terms. Whereas a captured test might look like this:
text $main.accountField “12”
click $main.OK
menu $operations
menu $withdraw
click $withdrawDialog.all
...
a script might look like this:
select-account 12
withdraw all
...
The script commands are subroutines that perform the appropriate mouse clicks and key
presses. If the API is well-designed, most GUI changes will require changes only to the
implementation of functions like withdraw, not to all the tests that use them.20 Please
note that well-designed test APIs are as hard to write as any other good API. That is,
they’re hard, and you shouldn’t expect to get it right the first time.
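To make this concrete, here is a minimal sketch in Python of such a test API. The gui driver object and its type_text, click, and choose_menu calls are hypothetical stand-ins for whatever GUI automation tool you actually use; only the wrapper methods know the widget names from the captured example above.

class BankTestAPI:
    def __init__(self, gui):
        self.gui = gui  # hypothetical low-level GUI driver supplied by the tool

    def select_account(self, account_number):
        # All GUI-specific detail lives here; if the dialog layout changes,
        # only this method needs updating, not every test that selects an account.
        self.gui.type_text("main.accountField", str(account_number))
        self.gui.click("main.OK")

    def withdraw(self, amount):
        self.gui.choose_menu("operations", "withdraw")
        if amount == "all":
            self.gui.click("withdrawDialog.all")
        else:
            self.gui.type_text("withdrawDialog.amount", str(amount))
        self.gui.click("withdrawDialog.OK")

def test_withdraw_everything(api):
    # The test itself is expressed in GUI-independent terms.
    api.select_account(12)
    api.withdraw("all")

Because only select_account and withdraw mention widget names, most GUI changes require edits in one place rather than in every test.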
In a variant of this approach, the tests are data-driven. The tester provides a table
describing key values. Some tool reads the table and converts it to the appropriate mouse
clicks. The table is even less vulnerable to GUI changes because the sequence of
operations has been abstracted away. It’s also likely to be more understandable, especially
to domain experts who are not programmers. See [Pettichord96] for an example of data-driven
automated testing.
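As an illustration of the data-driven variant, here is a minimal Python sketch that reads a tester-supplied CSV table and drives the same hypothetical test API; the column names (account, amount, expected_balance) and the read_balance helper are assumptions made only for this example.

import csv

def run_withdrawal_table(api, table_path):
    # Each row of the table is one test iteration.
    with open(table_path, newline="") as f:
        for row in csv.DictReader(f):
            api.select_account(int(row["account"]))
            api.withdraw(row["amount"])
            actual = api.read_balance()   # assumed helper on the test API
            status = "PASS" if str(actual) == row["expected_balance"] else "FAIL"
            print(status, "account", row["account"], "withdraw", row["amount"])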
Note that these more abstract tests (whether scripted or data-driven) do not necessarily
test the user interface thoroughly. If the Withdraw dialog can be reached via several
routes (toolbar, menu item, hotkey), you don’t know whether each route has been tried.
You need a separate (most likely manual) effort to ensure that all the GUI components are
connected correctly.
Whatever approach you take, don’t fall into the trap of expecting regression tests to
find a high proportion of new bugs. Regression tests discover that new or
changed code breaks what used to work. While that happens more often than any of us
20 The “Joe Gittano” stories and essays on my web page, http://www.stlabs.com/marick/root.htm, go into this
approach in more detail.
would like, most bugs are in the product’s new or intentionally changed behavior. Those
bugs have to be caught by new tests.
I ♥ code coverage
GUI capture/replay testing is appealing because it’s a quick fix for a difficult problem.
Another class of tool has the same kind of attraction.
The difficult problem is that it’s so hard to know if you’re doing a good job testing. You
only really find out once the product has shipped. Understandably, this makes managers
uncomfortable. Sometimes you find them embracing code coverage with the
devotion that only simple numbers can inspire. Testers sometimes also
become enamored of coverage, though their romance tends to be less fervent and ends
sooner.
What is code coverage? It is any of a number of measures of how thoroughly code is
exercised. One common measure counts how many statements have been executed by any
test. The appeal of such coverage is twofold:
1. If you’ve never exercised a line of code, you surely can’t have found any of its bugs.
So you should design tests to exercise every line of code.
2. Test suites are often too big, so you should throw out any test that doesn’t add value.
A test that adds no new coverage adds no value.
Only the first sentences in (1) and (2) are true. I’ll illustrate with this picture, where the
irregular splotches indicate bugs:
(Figure omitted: a large region of "Tests needed to find bugs", of which the "Tests needed for coverage" form only a small subset.)
If you write only the tests needed to satisfy coverage, you’ll find bugs. You’re guaranteed
to find the code that always fails, no matter how it’s executed. But most bugs depend on
how a line of code is executed. For example, code with an off-by-one error fails only
when you exercise a boundary. Code with a divide-by-zero error fails only if you divide
by zero. Coverage-adequate tests will find some of these bugs, by sheer dumb luck, but
not enough of them. To find enough bugs, you have to write additional tests that
“redundantly” execute the code.
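A small Python illustration (the function and its passing threshold are invented for the example): the two asserts below execute every statement, so statement coverage is 100%, yet the off-by-one bug at the boundary survives.

def is_passing(score):
    return score > 60   # off-by-one bug: a score of exactly 60 should pass

assert is_passing(90) is True    # covers the only statement
assert is_passing(30) is False   # same statement again; coverage adds nothing
# Only a "redundant" boundary test exposes the defect:
# assert is_passing(60) is True  # would fail, revealing the off-by-one error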
For the same reason, removing tests from a regression test suite just because
they don’t add coverage is dangerous. The point is not to cover the code; it’s to have
tests that can discover enough of the bugs that are likely to be caused when the code is
changed. Unless the tests are ineptly designed, removing tests will just remove power. If
they are ineptly designed, using coverage converts a big and lousy test suite to a small and
lousy test suite. That’s progress, I suppose, but it’s addressing the wrong problem.21
A grave danger of code coverage is that it is concrete, objective, and easy to measure.
Many managers today are using coverage as a performance goal for testers.
Unfortunately, a cardinal rule of management applies here: “Tell me how a person is
evaluated, and I’ll tell you how he behaves.” If a person is evaluated by how much
coverage is achieved in a given time (or in how little time it takes to reach a particular
coverage goal), that person will tend to write tests to achieve high coverage in the fastest
way possible. Unfortunately, that means shortchanging careful test design that targets
bugs, and it certainly means avoiding in-depth, repetitive testing of “already covered”
code.22
Using coverage as a test design technique works only when the testers are both designing
poor tests and testing redundantly. They’d be better off at least targeting their poor tests
at new areas of code. In more normal situations, coverage as a guide to design only
decreases the value of the tests or puts testers under unproductive pressure to meet
unhelpful goals.
Coverage does play a role in testing, not as a guide to test design, but as a rough
evaluation of it. After you’ve run your tests, ask what their coverage is. If certain areas of
the code have no or low coverage, you’re sure to have tested them shallowly. If that
wasn’t intentional, you should improve the tests by rethinking their design. Coverage has
told you where your tests are weak, but it’s up to you to understand how.
You might not entirely ignore coverage. You might glance at the uncovered lines of code
(possibly assisted by the programmer) to discover the kinds of tests you omitted. For
example, you might scan the code to determine that you undertested a dialog box’s error
handling. Having done that, you step back and think of all the user errors the dialog box
should handle, not how to provoke the error checks on line 343, 354, and 399. By
rethinking design, you’ll not only execute those lines, you might also discover that several
other error checks are entirely missing. (Coverage can’t tell you how well you would
have exercised needed code that was left out of the program.)
21 Not all regression test suites have the same goals. Smoke tests are intended to run fast and find grievous, obvious
errors. A coverage-minimized test suite is entirely appropriate.
22 In pathological cases, you’d never bother with user scenario testing, load testing, or configuration testing, none of
which add much, if any, coverage to functional testing.
There are types of coverage that point more directly to design mistakes than statement
coverage does (branch coverage, for example).23 However, none - and not all of them put
together - are so accurate that they can be used as test design techniques.
One final note: Romances with coverage don’t seem to end with the former devotee
wanting to be “just good friends”. When, at the end of a year’s use of coverage, it has not
solved the testing problem, I find testing groups abandoning coverage entirely.
That’s a shame. When I test, I spend somewhat less than 5% of my time looking at
coverage results, rethinking my test design, and writing some new tests to correct my
mistakes. It’s time well spent.
Acknowledgements
My discussions about testing with Cem Kaner have always been illuminating. The
LAWST (Los Altos Workshop on Software Testing) participants said many interesting
things about automated GUI testing. The LAWST participants were Chris Agruss, Tom
Arnold, James Bach, Jim Brooks, Doug Hoffman, Cem Kaner, Brian Lawrence, Tom
Lindemuth, Noel Nyman, Brett Pettichord, Drew Pritsker, and Melora Svoboda. Paul
Czyzewski, Peggy Fouts, Cem Kaner, Eric Petersen, Joe Strazzere, Melora Svoboda, and
Stephanie Young read an earlier draft.
References
[Cusumano95]
M. Cusumano and R. Selby, Microsoft Secrets, Free Press, 1995.
[Dyer92]
Michael Dyer, The Cleanroom Approach to Quality Software Development,
Wiley, 1992.
[Friedman95]
M. Friedman and J. Voas, Software Assessment: Reliability, Safety, Testability,
Wiley, 1995.
[Kaner93]
C. Kaner, J. Falk, and H.Q. Nguyen, Testing Computer Software (2/e), Van
Nostrand Reinhold, 1993.
[Kaner96a]
Cem Kaner, “Negotiating Testing Resources: A Collaborative Approach,” a
position paper for the panel session on “How to Save Time and Money in
Testing”, in Proceedings of the Ninth International Quality Week (Software
Research, San Francisco, CA), 1996. (http://www.kaner.com/negotiate.htm)
[Kaner96b]
Cem Kaner, “Software Negligence & Testing Coverage,” in Proceedings of STAR
96, (Software Quality Engineering, Jacksonville, FL), 1996.
(http://www.kaner.com/coverage.htm)
23 See [Marick95], chapter 7, for a description of additional code coverage measures. See also [Kaner96b] for a list of
more than one hundred types of coverage.
[Lyu96]
Michael R. Lyu (ed.), Handbook of Software Reliability Engineering, McGraw-
Hill, 1996.
[Marick95]
Brian Marick, The Craft of Software Testing, Prentice Hall, 1995.
[Marick97]
Brian Marick, “The Test Manager at the Project Status Meeting,” in Proceedings
of the Tenth International Quality Week (Software Research, San Francisco, CA),
1997. (http://www.stlabs.com/~marick/root.htm)
[McConnell96]
Steve McConnell, Rapid Development, Microsoft Press, 1996.
[Moore91]
Geoffrey A. Moore, Crossing the Chasm, Harper Collins, 1991.
[Moore95]
Geoffrey A. Moore, Inside the Tornado, Harper Collins, 1995.
[Musa87]
J. Musa, A. Iannino, and K. Okumoto, Software Reliability: Measurement,
Prediction, Application, McGraw-Hill, 1987.
[Nielsen93]
Jakob Nielsen, Usability Engineering, Academic Press, 1993.
[Pettichord96]
Bret Pettichord, “Success with Test Automation,” in Proceedings of the Ninth
International Quality Week (Software Research, San Francisco, CA), 1996.
(http://www.io.com/~wazmo/succpap.htm)
[Rothman96]
Johanna Rothman, “Measurements to Reduce Risk in Product Ship Decisions,” in
Proceedings of the Ninth International Quality Week (Software Research, San
Francisco, CA), 1996. (http://world.std.com/~jr/Papers/QW96.html)
[Voas91]
J. Voas, L. Morell, and K. Miller, “Predicting Where Faults Can Hide from
Testing,” IEEE Software, March, 1991.
Some Classic Testing Mistakes
The role of testing
· Thinking the testing team is responsible for assuring quality.
· Thinking that the purpose of testing is to find bugs.
· Not finding the important bugs.
· Not reporting usability problems.
· No focus on an estimate of quality (and on the quality of that estimate).
· Reporting bug data without putting it into context.
· Starting testing too late (bug detection, not bug reduction).
Planning the complete testing effort
· A testing effort biased toward functional testing.
· Underemphasizing configuration testing.
· Putting stress and load testing off to the last minute.
· Not testing the documentation.
· Not testing installation procedures.
· An overreliance on beta testing.
· Finishing one testing task before moving on to the next.
· Failing to correctly identify risky areas.
· Sticking stubbornly to the test plan.
Personnel issues
· Using testing as a transitional job for new programmers.
· Recruiting testers from the ranks of failed programmers.
· Testers are not domain experts.
· Not seeking candidates from the customer service staff or technical writing staff.
· Insisting that testers be able to program.
· A testing team that lacks diversity.
· A physical separation between developers and testers.
· Believing that programmers can’t test their own code.
· Programmers are neither trained nor motivated to test.
The tester at work
· Paying more attention to running tests than to designing them.
· Unreviewed test designs.
· Being too specific about test inputs and procedures.
· Not noticing and exploring “irrelevant” oddities.
· Checking that the product does what it’s supposed to do, but not that it doesn’t do
what it isn’t supposed to do.
· Test suites that are understandable only by their owners.
· Testing only through the user-visible interface.
· Poor bug reporting.
· Adding only regression tests when bugs are found.
· Failing to take notes for the next testing effort.
Test automation
· Attempting to automate all tests.
· Expecting to rerun manual tests.
· Using GUI capture/replay tools to reduce test creation cost.
· Expecting regression tests to find a high proportion of new bugs.
Code coverage
· Embracing code coverage with the devotion that only simple numbers can inspire.
· Removing tests from a regression test suite just because they don’t add coverage.
· Using coverage as a performance goal for testers.
· Abandoning coverage entirely.

..Test Methods...
1. What approach should be used for testing?
2. What are the Test Derivation Techniques?
3. How many different Test Types are there?
4. Why use Generic Test Objectives?
5. What are Quality Gates?
6. What Acceptance Criteria should be used?
7. Testing Metrics - Do you have examples?
8. Why use Test Scripts?
9. What tools are available for Test Support?
10. How-to Guides - What are they?
11. What are the 10 best steps for software testing?





1. What approach should be used for testing?

There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation.
Common quality attributes include reliability, stability, portability, maintainability and usability.
Regardless of the methods used or the level of formality involved, the desired result of testing is a level of confidence that the software has an acceptable defect rate.
When changes are made to software, a regression test ensures that the changes made in the current software do not affect the functionality of the existing software.
The role of highly skilled professionals in software development has never been more difficult - or more crucial - as organisations try to complete application development faster and more cost-effectively.
Test teams that use manual testing exclusively are struggling to keep up.
Because they cannot test all the code, they risk missing significant defects. At the same time, they cannot stop testing long enough to learn new skills.



2. What are the Test Derivation Techniques?

• Equivalence partitioning
• Boundary value analysis
• State transition testing
• Cause-effect graphing
• Syntax testing
• Statement testing
• Branch / decision testing
• Data flow testing
• Branch condition testing
• Branch condition combination testing
• Modified condition decision testing
• Business process
• Requirements coverage
• Use case derivation
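To make two of the techniques above concrete, here is a minimal Python sketch of equivalence partitioning and boundary value analysis applied to an invented rule that a withdrawal amount must be between 1 and 500; the rule and all values exist only for illustration.

VALID_RANGE = (1, 500)   # assumed requirement, for illustration only

# Equivalence partitioning: one representative value per partition.
equivalence_values = [-50, 250, 900]   # below range, inside range, above range

# Boundary value analysis: the values at and on either side of each boundary.
boundary_values = [0, 1, 2, 499, 500, 501]

def derive_test_inputs():
    # Combine both techniques into one de-duplicated, ordered input list.
    return sorted(set(equivalence_values + boundary_values))

print(derive_test_inputs())   # -> [-50, 0, 1, 2, 250, 499, 500, 501, 900]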


3. How many different Test Types are there?

• Archive tests
• Clinical safety tests
• Compatibility and conversion tests
• Conformance tests
• Cutover tests
• Flood and volume tests
• Functional tests
• Installation and initialization tests
• Interoperability tests
• Load and stress tests
• Performance tests
• Portability tests
• End-to-end thread testing
• Recovery and restart
• Documentation tests / manual procedure tests
• Reliability / Robustness tests
• Security tests
• Temporal tests
• Black box / White box tests
• User interface tests / W3C WAI Accessibility testing


4. Why use Generic Test Objectives?

• Demonstrate component meets requirements
• Demonstrate component is ready to reuse in larger subsystems
• Demonstrate integrated components are correctly assembled or combined and collaborate
• Demonstrate system meets functional requirements
• Demonstrate system meets non-functional requirements
• Demonstrate system meets industry regulation requirements
• Demonstrate supplier meets contractual obligations
• Validate that system meets business or user requirements
• Demonstrate system, processes, and people meet business requirements


5. What are Quality Gates?

• The Quality Gate process is a formal way of specifying and recording the transition between stages in the project lifecycle
• Each Quality Gate details the deliverables required, the actions to be completed, and the metrics associated with that gate
• All testing stages specify formal entry and exit criteria
• The Quality Gate review process verifies the specified acceptance criteria have been achieved


6. What Acceptance Criteria should be used?
In the context of the system to be released, good enough is achieved when all of the following apply:
• The release has sufficient benefits
• The release has no critical problems
• The benefits sufficiently outweigh the non-critical problems
• In the present situation, all things considered, delaying the release to potentially further improve the system would cause more harm than good


7. Testing Metrics - Do you have examples?

• Number of test cases
• Number of tests executed
• Number of tests passed
• Number of tests failed
• Number of re-tests
• Number of Requirements tested
• Number of Defects per lines of software code or per function
• Number of Defects found in computer file types (e.g. jav, aspx, xml, xslt, html, com, doc)
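As a rough illustration, the short Python sketch below computes a few of the metrics listed above from raw counts; all the numbers are made up.

executed, passed, failed, retests = 120, 104, 16, 9
requirements_total, requirements_tested = 45, 41
defects, kloc = 37, 12.5   # defects found, and thousands of lines of code

print("Pass rate:             {:.1%}".format(passed / executed))
print("Requirements coverage: {:.1%}".format(requirements_tested / requirements_total))
print("Defect density:        {:.1f} defects per KLOC".format(defects / kloc))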



8. Why use Test Scripts?
• Test scripts are necessary to execute repeatable tests
• Can be manually executed
• Can be automatically executed
• Can be based on re-usable building blocks
• Are a constructive component in the testing process
• Provide traceability and documentation



9. What tools are available for Test Support?
• Test Asset Management Tool
• Functional test tool
• Non Functional test tool
• Monitoring tools (for soak testing and live monitoring)
• Consistent, company-wide, Defect Management Process
• Repeatable Test Execution Processes
• Timely Reporting
• Use Cases Documentation
• Test Harnesses
• Common Nomenclature in use by all
• How-to Guides



10. How-to Guides - What are they?
These are some of the possible How-to guides…
• How-to read Use Cases
• How-to scope each test
• How-to determine which test types are necessary
• How-to derive test conditions
• How-to prepare a test planner
• How-to write test cases
• How-to plan for Security testing
• How-to conduct WAI Accessibility testing
• How-to test Service Level Agreements
• How-to assess risks
• How-to raise, track and manage defects
• How-to create and maintain a regression test pack
• How-to setup and manage User Acceptance Testing


11. What are the 10 best steps for software testing?
1. Establish the Test Methodology you wish to follow ... E.g. ISEB
2. Establish the Test Principle ... E.g. Fail fast
3. Define the Requirements ... If there are no requirements then there is nothing to test
4. Document the Requirements Traceability matrix ... This should work in both directions
5. Define the specific tests which apply in your situation
6. Document the test plan
7. Document the test cases
8. Define the start of testing
9. Conduct testing
10. Define the point at which testing can stop ... When the benefit of continuing testing is outweighed by the effort of continuing testing
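As an illustration of step 4, here is a minimal Python sketch of a requirements traceability matrix that can be checked in both directions: every requirement should map to at least one test case, and every test case should map back to a requirement. The requirement and test case IDs are invented.

traceability = {
    "REQ-001": ["TC-010", "TC-011"],
    "REQ-002": ["TC-020"],
    "REQ-003": [],                     # untested requirement, flagged below
}
all_test_cases = {"TC-010", "TC-011", "TC-020", "TC-999"}   # TC-999 has no requirement

untested = [req for req, tcs in traceability.items() if not tcs]
traced = {tc for tcs in traceability.values() for tc in tcs}
orphaned = all_test_cases - traced

print("Requirements with no tests:", untested)      # ['REQ-003']
print("Test cases with no requirement:", orphaned)  # {'TC-999'}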

Creation process of a test case template

The purpose of this tutorial is to show a creation process of a test case template. Often, we create it in the wrong way, because we use the wrong field types, and this, in turn, increases the execution and maintenance process time.
In this tutorial I will review what works in testing and what doesn’t. I will then take the working pieces and fit them together into one template.
I presume that you have already read an article or a book about using the "Use Case" modeling technique.
If you haven’t, you can find articles and tutorials by searching the Web, you can read "Writing Effective Use Cases", a book by Alistair Cockburn, or you can see my recommendations at the end of this lesson.
Pay attention to the extended Use Cases that can be the source for the TC’s.
Extended Use Cases include:
• Business life cycle Use Case
• Supplementary specification with non-functional requirements that has:
• Table with all external operational variables
• Relative frequency of each operation
• Performance requirements
• Useful for testing UML diagrams
If you would like to read a book about creating TCs, my suggestion would be "Introducing Software Testing: A Practical Guide to Getting Started" by Louise Tamres, 2002. In this book you will find a description, with examples, of creating test cases from use cases.
The information below was taken from accepted, identified sources and should help you follow my description. It is necessary because some terms have various meanings in software testing, and I provide these definitions to avoid misunderstanding.
The golden rules of software testing defined by Glenford J. Myers, [The Art of Software Testing, 1979]
• Testing: run program with intent to find an error
Test case (TC). A set of test inputs, executions, and expected results developed for a particular objective.
Test Procedure. A document, providing detailed instructions for the [manual] execution of one or more test cases. [BS7925-1] Often called - a manual test script.
Test suite. A collection of test scripts or test cases that is used for validating bug fixes (or finding new bugs) within a logical or physical area of a product. [H. Q. Nguyen, 2001]
The test case description can either be documented manually or stored in the test repository of an automated testing tool suite. If the test cases are documented with a tool, the format and content will be limited to what the input forms and test repository can accept. [D.J. Mosley 2000]
In our case we assume that test cases must be documented manually.
Use Case (UC) A description of a set of sequences of actions, including variants, that a system performs that yields an observable result of value to an actor. [UML guide by G. Booch, 2001]

In order to select the fields that we will use in our template, let us first identify all possible field choices for the TC:
1. Project name and test suite ID and name
2. Use Case Name (name is usually an action, like: Create a…)
3. Version date and version number of the test cases
4. Version Author and contact information
5. Approval and distribution list
6. Revision history with reasons for update
7. Other sources and prerequisite information
8. Environment prerequisites (installation and network)
9. Test pre-conditions (data created before testing)
***************************************
10. TC name
11. TC number (ID)
12. Use Case scenario (main success scenario, flow, path, and branching action)
13. Type of Software Testing (i.e. functional, load, etc.)
14. Objectives
15. Initial conditions or preconditions
16. Valid or invalid conditions
17. Input data (ID, type, and values)
18. Test steps
19. Expected result
20. Clean up or post conditions
21. Comments
****************************************
22. Actual results (passed/fail)
23. Date
24. Tester
25. Type of machine
26. Type of OS etc.
27. Build release
28. Label name
29. Date of release
1. Select all fields that will be used in the Test Log document. From my experience, MS Excel is the best for a Test Log. The following fields are usually used in a test log document, but these fields sometimes mistakenly appear in the test case template.
o Actual results (passed/fail)
o Date
o Tester
o Type of machine
o Type of OS
o Build release
o Label name
o Date of release
2. Now select all fields that belong to the test suite and do not depend on small details. We will assume that for each use case, we will create a number of test cases in a separate test suite document. This information can be provided in the beginning of the test suite document:
o Project name and test suite ID and name
o Use Case Name (name is usually an action like: Create a…)
o Version date and version number of the test cases
o Version Author and contact information
o Approval and distribution list
o Revision history with reasons for update
o Environment prerequisites (installation and network)
o Test pre-conditions (data created before testing)
o Other sources and prerequisites information
o Clean up or post conditions
3. Choose all the necessary fields for the TC template from the remaining list:
1. TC name
2. TC number (ID)
3. Use Case scenario (main success scenario, flow etc.)
4. Type of Testing.
5. Objectives
6. Initial conditions or preconditions
7. Valid or invalid conditions (use the word Verify for valid conditions and Attempt to for TC with invalid data. This will help simplify verification and maintenance)
8. Input data
9. Test steps
10. Expected result
11. Comments
Let us choose only the necessary fields and combine some information, like TC number, type of test, and project name, into one field of the template.
Remember: adding more fields to the template increases the amount of work needed to create and maintain the test suite. The project cost rises as well. Keep in mind that the same rules apply to the test suite and the test log document.
1. Test suite name; TC name; TC number (ID); type of testing;
2. Use Case scenario (main success scenario, flow etc.)
3. Objectives
4. Initial conditions or preconditions
5. Valid or invalid conditions (when it is possible, begin your description with the word Verify for valid conditions and input data and Attempt to for invalid. This will help you to simplify verification and maintenance of TC’s.)
6. Input data (ID, type, and values)
7. Test steps
8. Expected result
If you plan to use automation testing tools in the future, please review the following steps:
o Perform setup
o Perform the test
o Verify the results
o Log the results
o Handle unpredictable situations
o Decide to stop or continue the test case
o Perform cleanup
[D.J. Mosley 2000/2002]
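As an illustration, here is a minimal Python sketch of an automated test case organized around those seven steps; the app object and its start, login, current_screen, and shutdown methods are hypothetical placeholders for whatever your tool or application exposes.

import logging

def run_login_test(app):
    logging.basicConfig(level=logging.INFO)
    try:
        app.start()                                  # perform setup
        app.login("testuser", "secret")              # perform the test
        ok = app.current_screen() == "HOME"          # verify the results
        logging.info("login test %s", "PASS" if ok else "FAIL")   # log the results
        return ok                                    # decide to stop or continue
    except Exception as err:                         # handle unpredictable situations
        logging.error("login test aborted: %s", err)
        return False
    finally:
        app.shutdown()                               # perform cleanup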
I can’t resist reminding you of Cem Kaner’s good practices for designing TCs before showing the sample templates. (A more detailed description of creating a good TC could be the topic of a separate book.)
An excellent test case satisfies the following criteria:
• Reasonable probability of catching an error
• Exercises an area of interest
• Does interesting things
• Doesn’t do unnecessary things
• Neither too simple nor too complex
• Not redundant with other tests
• Makes failures obvious
• Allows isolation and identification of errors
[Cem Kaner, "Black Box Software Testing - Professional Seminar", 2002, section 8, "Test case design"]
Scripting:
An Industry Worst Practice
COMPLETE SCRIPTING is favored by people who believe that repeatability is everything and who believe that with repeatable scripts, we can delegate to cheap labor.
[Cem Kaner, "Black Box Software Testing - Professional Seminar", 2002, section 23, "Scripting"]
The following are samples of templates with the fields that we previously chose.
For each unique test case number, I chose the following format:
XXX.XXX.-XXX
The description is:
XXX. - name of the project (abbreviation)
XXX. - type of testing
XXX - unique number
If you are not using the Use Case modeling technique, you can rename the Use Case flow field to "Requirement under Test".
Blank template:
TC # | UC flow
Objectives
Preconditions | Input (maybe for different conditions) | Expected Results



Guidance for creating text in a template.

TC#: Proj.Fun.-010
UC flow: 2.2.2 main success scenario (basic, alternative, or exception flow name, or function under test)
Objectives: Try to use:
- Verify that… (for a TC with valid data)
- Attempt to… (for a TC with invalid data)
Preconditions:
- The system displays…
- User has successfully…
- The system allows…
- The user has been authenticated… (for different conditions where applicable)
Input:
- The user selects…
- The user enters…
Expected Results:
- The expected result may be copy-pasted from the Use Case, but it depends on how the Use Case is written.
I'm giving the best advice I have. You have to decide what is suitable for your needs and modify the template accordingly.

MANUAL TESTING

1. What is the testing process?
Verifying that given input data produce the expected output.
2. What is the difference between testing and debugging?
The big difference is that debugging is conducted by a programmer, who fixes the errors during the debugging phase. A tester never fixes errors, but rather finds them and returns them to the programmer.
3. What is the difference between structural and functional testing?
Structural testing is "white box" testing, based on the algorithm or code. Functional testing is "black box" (behavioral) testing, where the tester verifies the functional specification.
4. What is a bug? What types of bugs do you know?
A bug is an error that occurs during execution of the program. There are two types of bugs: syntax and logical.
5. What is the difference between testing and quality assurance (QA)?
This question is surprisingly popular. However, the answer is quite simple. The goals of the two are different: the goal of testing is to find errors, while the goal of QA is to prevent errors in the program.
6. What kinds of testing do you know? What is system testing? What is integration testing? What is unit testing? What is regression testing?
Your theoretical background and homework may shine in this question. System testing is testing of the entire system as a whole. This is what the user sees and feels about the product you provide. Integration testing is the testing of the integration of different modules of the system. Usually, the integration process is quite painful, and this testing is the most serious one of all. Integration testing comes before system testing. Unit testing is the testing of a single unit (module) within the system. It is conducted before integration testing. Regression testing is a "backward check": the idea is to ensure that new functionality added to the system did not break old, already checked functionality.
7. What major processes are involved in testing?
The major processes include:
1. Planning (test strategy, test objectives, risk management)
2. Design (functions to be tested, test scenario, test cases)
3. Development (test procedures, test scripts, test environment)
4. Execution (execute tests)
5. Evaluation (evaluate test results, compare actual results with expected results)
8. Could you test a program 100%? 90%? Why?
Definitely not! The major problem with testing is that you cannot calculate how many errors remain in the code, in the functionality, etc. There are many factors involved, such as the experience of the programmers, the complexity of the system, and so on.
9. How would you test a mug (chair/table/gas station etc.)?
First of all, you must ask for the requirements, the functional specification, and the design document of the mug. There you will find requirements such as the ability to hold hot water, waterproofing, stability, breakability, and so on. Then you should test the mug against all of those documents.
10. How would you conduct your test?
Each test is based on the technical requirements of the software product.
11.What is the other name for white box testing?
Clear box testing
12.What is the other name for the waterfall model?
Linear sequential model
13.What is considered a good test?
It should cover most of the object's functionality
14.What is 'Software Quality Assurance'?
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.
15.What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and evaluating the results (eg, 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should.
16.What are some recent major computer system failures caused by software bugs?
• In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.
• A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems.
• According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash.
• In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date '31/12/2000'; the trains were started by altering the control system's date settings.
• News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn't work.
• In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district's CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.
• In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.
• Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.
• In April of 1999 a software bug caused the failure of a $1.2 billion military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.
• A small town in Illinois received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues.
• In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready.
• The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999 according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.
• In April of 1998 a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug.
• January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills.
• In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.
• A retail store chain filed suit in August of 1997 against a transaction processing system vendor (not a credit card company) due to the software's inability to handle credit cards with year 2000 expiration dates.
• In August of 1997 one of the leading consumer credit reporting companies reportedly shut down their new public web site after less than two days of operation due to software problems. The new site allowed web site visitors instant access, for a small fee, to their personal credit reports. However, a number of initial users ended up viewing each others' reports instead of their own, resulting in irate customers and nationwide publicity. The problem was attributed to "...unexpectedly high demand from consumers and faulty software that routed the files to the wrong computers."
• In November of 1996, newspapers reported that software bugs caused the 411 telephone information system of one of the U.S. RBOC's to fail for most of a day. Most of the 2000 operators had to search through phone books instead of using their 13,000,000-listing database. The bugs were introduced by new software modifications and the problem software had been installed on both the production and backup systems. A spokesman for the software vendor reportedly stated that 'It had nothing to do with the integrity of the software. It was human error.'
• On June 4 1996 the first flight of the European Space Agency's new Ariane 5 rocket failed shortly after launching, resulting in an estimated uninsured loss of a half billion dollars. It was reportedly due to the lack of exception handling of a floating-point error in a conversion from a 64-bit integer to a 16-bit signed integer.
• Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.
• Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a '...funny feeling in my gut', decided the apparent missile attack was a false alarm. The filtering software code was rewritten.
17.Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied,
"I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords."
"My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors."
"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."
18.Why does software have bugs?
•miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
•software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well-engineered.
•programming errors - programmers, like anyone else, can make mistakes.
• changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.
• time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
• egos - people prefer to say things like:
'no problem'
'piece of cake'
'I can whip that out in a few hours'
'it should be easy to update that old code'
instead of:
'that adds a lot of complexity and we could end up making a lot of mistakes'
'we have no idea if we can do that; we'll wing it'
'I can't estimate how long it will take, until I take a close look at it'
'we can't figure out what that old spaghetti code did in the first place'
If there are too many unrealistic 'no problem's', the result is bugs.
• poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
•software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.
19.How can new Software QA processes be introduced in an existing organization?
•A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
•Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
•For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
•In all cases the most value for effort will be in requirements management processes, with a goal of clear, complete, testable requirement specifications or expectations.
20.What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.
21.What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
22.What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable above: their skill may have low visibility, but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.
23.What kinds of testing should be considered?
•Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
•White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
•unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
•incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
•integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
•functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)
•system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
•end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
•sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
•regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
•acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
•load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
•stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
•performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
•usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
•install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
•recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
•security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
•compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
•exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
•ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
•user acceptance testing - determining if software is satisfactory to an end-user or customer.
•comparison testing - comparing software weaknesses and strengths to competing products.
•alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
•beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
•mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
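To make the unit testing and regression testing entries above more concrete, here is a minimal sketch using Python's built-in unittest module; the apply_discount function is an invented example, and the same tests, kept and rerun after every change, double as a small regression suite.

import unittest

def apply_discount(price, percent):
    # Invented function under test.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_boundary_values(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)
        self.assertEqual(apply_discount(99.99, 100), 0.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 101)

if __name__ == "__main__":
    unittest.main()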
24.What are 5 common problems in the software development process?
•poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
•unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
•inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.
•featuritis - requests to pile on new features after development is underway; extremely common.
•miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.
25.What are 5 common solutions to software development problems?
•solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements.
•realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
•adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing.
•stick to initial requirements as much as possible - be prepared to defend against changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. This will provide them a higher comfort level with their requirements decisions and minimize changes later on.
•communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so that customers' expectations are clarified.
26.What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.
27.What is 'good code'?
'Good code' is code that works, is bug-free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards.
For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation:
•minimize or eliminate use of global variables.
•use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
•use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
•function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
•function descriptions should be clearly spelled out in comments preceding a function's code.
•organize code for readability.
•use whitespace generously - vertically and horizontally
•each line of code should contain 70 characters max.
•one code statement per line.
•coding style should be consistent throughout a program (eg, use of brackets, indentations, naming conventions, etc.)
•in adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.
•no matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or if possible a separate flow chart and detailed program documentation.
•make extensive use of error handling procedures and status and error logging.
•for C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading.)
•for C++, keep class methods small; less than 50 lines of code per method is preferable.
•for C++, make liberal use of exception handlers.
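As a brief, hypothetical illustration of several of these ideas (a descriptive function name, a comment header, small function size, and explicit error handling), a C++ function might look like the sketch below; the names and the error-handling policy are invented for the example only.

    #include <iostream>
    #include <vector>

    // CalculateAverageOrderValue:
    // Returns the average of the supplied order values, or 0.0 if the list is
    // empty or contains a negative value (both conditions are logged as errors).
    double CalculateAverageOrderValue(const std::vector<double>& orderValues)
    {
        if (orderValues.empty())
        {
            std::cerr << "CalculateAverageOrderValue: no orders supplied" << std::endl;
            return 0.0;
        }
        double totalValue = 0.0;
        for (double orderValue : orderValues)
        {
            if (orderValue < 0.0)
            {
                std::cerr << "CalculateAverageOrderValue: negative order value" << std::endl;
                return 0.0;
            }
            totalValue += orderValue;
        }
        return totalValue / orderValues.size();
    }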
28.What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status-logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help; some common rules of thumb include (a small illustrative sketch follows this list):
•the program should act in a way that least surprises the user.
•it should always be evident to the user what can be done next and how to exit.
•the program shouldn't let the users do something stupid without warning them.
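As a small, hypothetical sketch of the last rule of thumb (warn the user before a potentially destructive action), a simple console program might confirm before deleting anything; the messages and function name are invented for illustration.

    #include <iostream>
    #include <string>

    // Ask the user to confirm a destructive action before carrying it out.
    bool ConfirmAction(const std::string& warningMessage)
    {
        std::cout << warningMessage << " (y/n): ";
        std::string reply;
        std::getline(std::cin, reply);
        return reply == "y" || reply == "Y";
    }

    int main()
    {
        if (ConfirmAction("This will permanently delete all saved reports. Continue?"))
        {
            std::cout << "Deleting reports..." << std::endl; // the destructive action would go here
        }
        else
        {
            std::cout << "Operation cancelled." << std::endl;
        }
        return 0;
    }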
29.What is SEI? CMM? ISO? IEEE? ANSI? Will it help?
•SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
•CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of organizational 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.
• ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed.
• IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
• ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).
• Other software development process assessment methods besides CMM and ISO 9000 include SPICE, Trillium, TickIT, and Bootstrap.
30. What is the 'software life cycle'?
The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.
31.Will automated testing tools make testing easier?
•Possibly. For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or ongoing long-term projects, they can be valuable.
•A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded', with the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, the application can then be retested by just 'playing back' the 'recorded' actions and comparing the logged results to check the effects of the changes (a brief sketch of this comparison idea follows the tool list below). The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
•Other automated tools can include:
code analyzers - monitor code complexity, adherence to standards, etc.
coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
memory analyzers - such as bounds-checkers and leak detectors.
load/performance test tools - for testing client/server and web applications under various load levels.
web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
other tools - for test case management, documentation management, bug reporting, and configuration management.
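As a rough sketch of the log-comparison idea behind record/playback tools mentioned above, the hypothetical C++ program below compares a previously 'recorded' baseline log against the log from a new test run and reports the first difference; the file names and log format are assumptions made only for this example, not any particular tool's behavior.

    #include <fstream>
    #include <iostream>
    #include <string>

    // Compare a baseline results log against a new run's log, line by line.
    // Returns true if the logs match; otherwise reports the first mismatch.
    bool LogsMatch(const std::string& baselinePath, const std::string& newRunPath)
    {
        std::ifstream baseline(baselinePath);
        std::ifstream newRun(newRunPath);
        if (!baseline || !newRun)
        {
            std::cerr << "Could not open one of the log files." << std::endl;
            return false;
        }
        std::string expectedLine;
        std::string actualLine;
        int lineNumber = 0;
        while (std::getline(baseline, expectedLine))
        {
            ++lineNumber;
            if (!std::getline(newRun, actualLine) || actualLine != expectedLine)
            {
                std::cerr << "Mismatch at line " << lineNumber << ": expected '"
                          << expectedLine << "', got '" << actualLine << "'" << std::endl;
                return false;
            }
        }
        // Any extra lines in the new log also count as a mismatch.
        return !std::getline(newRun, actualLine);
    }

    int main()
    {
        bool passed = LogsMatch("baseline_results.log", "current_results.log");
        std::cout << (passed ? "PASS" : "FAIL") << std::endl;
        return passed ? 0 : 1;
    }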

32.What makes a good test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
33.What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.
34.What makes a good QA or Test manager?
A good QA, test, or QA/Test (combined) manager should:
• be familiar with the software development process
• be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)
• be able to promote teamwork to increase productivity
• be able to promote cooperation between software, test, and QA engineers
• have the diplomatic skills needed to promote improvements in QA processes
• have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to
• have people judgement skills for hiring and keeping skilled personnel
• be able to communicate with technical and non-technical people, engineers, managers, and customers.
• be able to run meetings and keep them focused
35.What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible.
36.What's the big deal about 'requirements'?
One of the most reliable methods of ensuring problems, or failure, in a complex software project is to have poorly documented requirements specifications. Requirements are the details describing an application's externally-perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would be something like 'the user must enter their previously-assigned password to access the application'. Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods are available depending on the particular project. Many books are available that describe various approaches to this task.
Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house or outside personnel, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be included if possible.
Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as 'The product shall.....'. 'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements.
In some organizations requirements may end up in high level project plans, functional specification documents, in design documents, or in other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.
37.What steps are needed to develop and run software tests?
The following are some of the steps to consider:
•Obtain requirements, functional design, and internal design specifications and other necessary documents
•Obtain budget and schedule requirements
•Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
•Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
•Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
•Determine test environment requirements (hardware, software, communications, etc.)
•Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
•Determine test input data requirements
•Identify tasks, those responsible for tasks, and labor requirements
•Set schedule estimates, timelines, milestones
•Determine input equivalence classes, boundary value analyses, error classes (see the brief example after this list)
•Prepare test plan document and have needed reviews/approvals
•Write test cases
•Have needed reviews/inspections/approvals of test cases
•Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
•Obtain and install software releases
•Perform tests
•Evaluate and report results
•Track problems/bugs and fixes
•Retest as needed
•Maintain and update test plans, test cases, test environment, and testware through life cycle
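To make the equivalence class and boundary value step above more concrete, suppose a (hypothetical) function is required to accept order quantities from 1 to 100 inclusive; a minimal C++ sketch of the corresponding boundary-value test inputs might look like this - the function and the range are invented purely for illustration.

    #include <iostream>

    // Hypothetical function under test: accepts order quantities from 1 to 100 inclusive.
    bool IsValidQuantity(int quantity)
    {
        return quantity >= 1 && quantity <= 100;
    }

    int main()
    {
        // Boundary values plus representative values from each equivalence class:
        // below the valid range, on its edges, inside it, and above it.
        struct { int input; bool expected; } testCases[] = {
            { 0, false },   // just below the lower boundary (invalid class)
            { 1, true },    // lower boundary (valid class)
            { 2, true },    // just above the lower boundary
            { 50, true },   // typical value inside the valid class
            { 99, true },   // just below the upper boundary
            { 100, true },  // upper boundary
            { 101, false }, // just above the upper boundary (invalid class)
        };

        int failures = 0;
        for (const auto& testCase : testCases)
        {
            if (IsValidQuantity(testCase.input) != testCase.expected)
            {
                std::cerr << "FAIL for input " << testCase.input << std::endl;
                ++failures;
            }
        }
        std::cout << (failures == 0 ? "All boundary tests passed" : "Some boundary tests failed") << std::endl;
        return failures;
    }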
38.What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
•Title
•Identification of software including version/release numbers
•Revision history of document including authors, dates, approvals
•Table of Contents
•Purpose of document, intended audience
•Objective of testing effort
•Software product overview
•Relevant related document list, such as requirements, design documents, other test plans, etc.
•Relevant standards or legal requirements
•Traceability requirements
•Relevant naming conventions and identifier conventions
•Overall software project organization and personnel/contact-info/responsibilities
•Test organization and personnel/contact-info/responsibilities
•Assumptions and dependencies
•Project risk analysis
•Testing priorities and focus
•Scope and limitations of testing
•Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
•Outline of data input equivalence classes, boundary value analysis, error classes
•Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
•Test environment validity analysis - differences between the test and production systems and their impact on test validity.
•Test environment setup and configuration issues
•Software migration processes
•Software CM processes
•Test data setup requirements
•Database setup requirements
•Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
•Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
•Test automation - justification and overview
•Test tools to be used, including versions, patches, etc.
•Test script/test code maintenance processes and version control
•Problem tracking and resolution - tools and processes
•Project test metrics to be used
•Reporting requirements and testing deliverables
•Software entrance and exit criteria
•Initial sanity testing period and criteria
•Test suspension and restart criteria
•Personnel allocation
•Personnel pre-training needs
•Test site/location
•Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
•Relevant proprietary, classified, security, and licensing issues.
•Open issues
39.What's a 'test case'?
•A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
•Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
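When a test case is automated, the same particulars (identifier, objective, setup, input data, steps, expected results) are often recorded directly in the test script; the hypothetical C++ sketch below shows one simple way to do that - the function under test, the test case ID, and the data are all invented for the example.

    #include <cassert>
    #include <iostream>
    #include <string>

    // Hypothetical function under test.
    std::string Greet(const std::string& userName)
    {
        return "Hello, " + userName + "!";
    }

    // Test case ID:    TC-GREET-001 (hypothetical)
    // Name:            Greeting uses the supplied user name
    // Objective:       Verify that the greeting text contains the user's name
    // Setup:           None required for this pure function
    // Input data:      userName = "Maria"
    // Steps:           Call Greet("Maria")
    // Expected result: Returned string is exactly "Hello, Maria!"
    int main()
    {
        std::string actual = Greet("Maria");
        assert(actual == "Hello, Maria!");
        std::cout << "TC-GREET-001 passed" << std::endl;
        return 0;
    }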
40.What should be done after a bug is found?
The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available.
The following are items to consider in the tracking process:
•Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
•Bug identifier (number, ID, etc.)
•Current bug status (e.g., 'Released for Retest', 'New', etc.)
•The application name or identifier and version
•The function, module, feature, object, screen, etc. where the bug occurred
•Environment specifics, system, platform, relevant hardware specifics
•Test case name/number/identifier
•One-line bug description
•Full bug description
•Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
•Names and/or descriptions of file/data/messages/etc. used in test
•File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
•Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
•Was the bug reproducible?
•Tester name
•Test date
•Bug reporting date
•Name of developer/group/organization the problem is assigned to
•Description of problem cause
•Description of fix
•Code section/file/module/class/method that was fixed
•Date of fix
•Application version that contains the fix
•Tester responsible for retest
•Retest date
•Retest results
•Regression testing requirements
•Tester responsible for regression tests
•Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
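As a sketch of how a subset of these tracking items might be represented in a simple in-house tool (most teams would use a commercial or open-source tracker instead), the C++ structure below uses hypothetical field names and covers only a few of the items listed above.

    #include <iostream>
    #include <string>

    // Minimal bug record covering a subset of the tracking items listed above.
    struct BugReport
    {
        std::string id;               // bug identifier (number, ID, etc.)
        std::string status;           // e.g. "New", "Released for Retest"
        std::string application;      // application name and version
        std::string module;           // function, module, feature, or screen
        std::string summary;          // one-line bug description
        std::string stepsToReproduce; // steps needed to reproduce the bug
        int severity;                 // e.g. 1 (critical) to 5 (low)
        bool reproducible;            // whether the bug could be reproduced
        std::string assignedTo;       // developer/group the problem is assigned to
    };

    int main()
    {
        BugReport report{ "BUG-1042", "New", "OrderEntry 2.3", "Checkout screen",
                          "Total is wrong when a discount code is applied",
                          "1. Add two items  2. Apply code SAVE10  3. View total",
                          2, true, "Payments team" };

        std::cout << report.id << " [" << report.status << "] severity "
                  << report.severity << ": " << report.summary << std::endl;
        return 0;
    }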