Testing Computer Software, part 10
• Test all data files, including clip art, templates, tutorials, samples, etc. Testers almost always
underestimate the time this takes. Try a small, representative group of files and clock your time. Work
out the average and multiply it by the number of files to check. This gives you the time required for
one cycle of testing of these files. You must also estimate the time required to retest revised files.
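The estimate described above is simple arithmetic; a minimal sketch (the function name, sample data, and the 30% revision-rate assumption are illustrative, not from the book) might look like this:

```python
def estimate_cycle_hours(sample_minutes, total_files, revision_rate=0.3):
    """Estimate one full testing cycle for a set of data files, plus a
    retest pass for revised files.

    sample_minutes -- minutes spent on each file in a timed, representative sample
    total_files    -- number of data files shipped with the product
    revision_rate  -- assumed fraction of files that will be revised and retested
    """
    avg_minutes = sum(sample_minutes) / len(sample_minutes)
    first_cycle = avg_minutes * total_files
    retest_pass = first_cycle * revision_rate
    return (first_cycle + retest_pass) / 60.0  # convert minutes to hours

# Timed a sample of four clip-art files; the product ships 500 of them.
print(round(estimate_cycle_hours([4, 6, 5, 5], 500), 1))  # → 54.2
```

Even with a small sample, an estimate like this usually surprises people; 500 files at five minutes each is more than a week of full-time work before any retesting.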
Make the testing status clear and get Problem Report issues resolved:
• Circulate summary and status reports that summarize open problems and provide various project
statistics. You have probably already been circulating reports like these, but as the project
progresses there are probably more reports, prepared more formally, and circulated to more senior
people in the company.
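The raw material for such a report is a simple tally over the problem-tracking database. A minimal sketch, assuming each report is a record with a status field (the field names and statuses here are hypothetical):

```python
from collections import Counter

# Hypothetical problem-report records pulled from the tracking system.
reports = [
    {"id": 101, "status": "open"},
    {"id": 102, "status": "open"},
    {"id": 103, "status": "deferred"},
    {"id": 104, "status": "fixed"},
]

# Tally reports by status for the weekly summary.
summary = Counter(r["status"] for r in reports)
print(dict(summary))  # → {'open': 2, 'deferred': 1, 'fixed': 1}
```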
• Use good sense with statistics. Don't treat the number of open reports and newly reported problems
as meaningful and important without further interpretation. This late in the schedule, senior
management will believe you (or act as if they believe you, to put pressure on the project manager).
These numbers convey false impressions. For more discussion, see Chapter 6, "Users of the tracking
system: Senior managers."
• Be careful when you add testers near the end of the project. A new tester who writes a stack of reports
that essentially say, "this Amiga program should follow Macintosh user interface rules" is wasting
valuable last-minute time. Late-joining testers who combine enthusiasm, poor judgment, and
obstinacy can cost a project much more than they benefit it.
• Circulate lists of deferred problems and call or participate in meetings to review the deferrals. By
beta, or soon after, these meetings should be weekly. Later they might be every few days. It's
important to get these decisions considered now, rather than a day or two before shipping the product.
Highlight any reports you want reconsidered—pick your appeals carefully.
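Building the agenda for such a review meeting is mostly a matter of filtering and ordering the deferred reports. A minimal sketch, under assumed record fields (`status`, `severity`, `appeal` are illustrative names, not from any particular tracking system):

```python
# Hypothetical problem-report records; severity 1 is most serious.
reports = [
    {"id": 7,  "status": "deferred", "severity": 3, "appeal": False},
    {"id": 12, "status": "deferred", "severity": 1, "appeal": True},
    {"id": 15, "status": "open",     "severity": 2, "appeal": False},
    {"id": 21, "status": "deferred", "severity": 2, "appeal": False},
]

# Pull the deferred reports and put the ones you are appealing first,
# then order the rest by severity, so the meeting starts with your
# carefully chosen reconsideration requests.
deferred = [r for r in reports if r["status"] == "deferred"]
agenda = sorted(deferred, key=lambda r: (not r["appeal"], r["severity"]))
print([r["id"] for r in agenda])  # → [12, 21, 7]
```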
• Circulate a list of open user interface design issues and call or join in a review meeting before the UI
freeze. You have no business asking for reconsideration of design decisions after the freeze if you had
the opportunity to ask before the freeze.
Review the manuals thoroughly as you get them. For drafts issued before beta, do all this testing before beta
too. For more discussion of documentation testing, read Chapter 9:
• You are probably more familiar with detail changes and late design changes than the writer, so make
a point of checking that the manual is up to date.
• Warn the writer of impending probable changes to the program.
• Look for features that aren't explained, are explained unclearly, or aren't explained in enough detail.
• On a multi-tester project, have each new tester stroke the latest version of the manual (check every
word of it against the program). This should usually be their first testing task. In the best case, on a
moderately large project, new testers join the project from mid-alpha until just before the UI freeze.
If so, each draft of the manual will be reviewed in depth by a tester who has never read it before, as
well as by someone familiar with it.
Continue measuring progress against the testing milestones you published at alpha. Check your progress
every week. Is your testing team running ahead or behind? What new tasks have you taken on, how much time
are they taking, and how do they affect the work you planned to get done? If you are running behind, or if
you've added lots of new work, what are you going to do? Can you eliminate or reduce some tasks? Do you
need more staff? Or is the programming schedule slipping so far anyway that your slippage doesn't matter?
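The weekly check itself can be as simple as comparing planned against completed work for each milestone task. A minimal sketch (the task names and hour figures are invented for illustration):

```python
# Planned vs. completed testing hours against the milestones published at alpha.
planned = {"file tests": 120, "manual check": 40, "regression": 200}
completed = {"file tests": 90, "manual check": 40, "regression": 150}

# Positive slippage means the task is behind plan.
slippage = {task: planned[task] - completed[task] for task in planned}
behind = {task: hours for task, hours in slippage.items() if hours > 0}
print(behind)  # → {'file tests': 30, 'regression': 50}
```

A table like `behind` makes the "ahead or behind?" question concrete each week, and forces the follow-up questions about cutting tasks or adding staff.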
Beware of the excuse that the programmers are so far behind that they're driving the schedule delays, not
you. Every tester and test manager believes this about their project when the schedule goes bad, but that
doesn't mean they're right:
• If you fall behind in testing, you will find bugs later that you could have found sooner. If you keep
finding errors that were in the program many versions ago, which could have been found and fixed
many versions ago, then part of the reason the program isn't ready to ship is that you're taking too long
to find the bugs.
• If you push yourself and your test team too hard, your reports will be harder to read and reproduce,
they'll include less investigation and simplification, and they'll take the programmers longer to fix.
• Bugs that live on and on in a project may reflect poor test reporting. If they do, it's partially your fault
when a late delay occurs because the project manager finally realizes that what you're talking about
is a serious problem, and the programmer finally figures out (or you finally show him) how to
reproduce it, so you all take time out to fix and retest it.
Make sure that you're covering the program at a pace you consider reasonable, and reporting problems
in a way you consider responsible.
OUTSIDE BETA TESTS
We need feedback from customers before shipping a product. But we often try to get too much from too
few people at the wrong times, using the wrong type of test. The common problem of beta testing is that the
test planners don't think through their objectives precisely enough. What is the point of running the test if
you won't have time to respond to what you learn? What types of information do you expect from this test,
and why can't you get them just as well from in-house testing? How will you know whether these outsiders
have done the testing you wanted them to do?

One reason behind the confusion is that there are at least seven distinct classes of end user tests that we
call beta tests. Figure 3.5 shows the objectives that drive these seven classes.
• Expert consulting: early in development, marketing or the project manager may talk with experts
about the product vision and perhaps about a functional prototype. The goal is to determine how they
like the overall product concept, what they think it needs, and what changes will make it more usable
or competitive.
Some companies get caught up in an idea that they shouldn't show outsiders anything until "beta",
some late stage in development. After beta, the experts are consulted. By then it's too late to make the
kinds of fundamental changes they request, so everybody gets frustrated.
If you're going to use experts, use them early.
• Magazine reviewers: some reviewers love to suggest changes and save their best reviews for products
they were successful in changing. To them, you have to send early copies of the program. To others,
who want to evaluate final product without changing it, you want to send very late copies. You won't
expect feedback from them, apart from last-minute bug discoveries, and no one should expect the
programmers to make late design changes in response to design feedback from these late version
reviewers. There's no time in the schedule to even evaluate their design feedback.
The marketing department must decide, on a case-by-case basis, who gets early code and who gets
it late.
• Testimonials might also be important for advertising. Again, marketing manages the flow of product
to these people. Some get code early and get to feel that they contributed to the design. Others get
almost-final code and can't contribute to the design.
• Profiling customer uses and polishing the design: it might be important to put almost-final product
in the hands of representative customers and see how they actually use it. Their experience might
influence the positioning of the product in initial advertising. Or their feedback might be needed to
find and smooth out rough edges in the product's design. To be of value, this type of test might leave
preliminary product in customer hands for a month or more, to let them gain experience with the
program. To allow time for polish to be implemented, in response to these customer results, you might
need another month (or more).
People often say that they do beta testing to find out how customers will use the product and to respond
to the problems these sample customers raise. If you want any hope of success with this type of testing,
budget at least 10 weeks, preferably more, between the start of this testing and the release of final
product to manufacturing.
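Working the 10-week budget backward from the ship date makes the constraint vivid. A minimal sketch (the release date is an invented example):

```python
from datetime import date, timedelta

# Assumed release-to-manufacturing date, for illustration only.
rtm = date(2024, 9, 1)

# A customer-profiling beta needs at least 10 weeks before release:
# roughly a month in customers' hands, plus a month or more to
# implement and verify the polish their feedback calls for.
beta_start_latest = rtm - timedelta(weeks=10)
print(beta_start_latest)  # → 2024-06-23
```

If the beta can't start by that date, the honest conclusion is that this class of test won't yield results you can act on, and you should plan a different kind of beta instead.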
• Finding bugs: Rather than using outside beta testers to look for functionality issues, argue for
bringing in members of your target market to evaluate the program and its documentation. You can
watch these people. You're paying them for this, so you can make sure they test for the desired number