
Testing Computer Software, part 2


THE TESTER'S OBJECTIVE: PROGRAM VERIFICATION?

THE PROGRAM DOESN'T WORK CORRECTLY

Public and private bugs

At this rate, if your programming language allows one executable statement per line, you make 150 errors while writing a 100-line program. Most programmers catch and fix more than 99% of their mistakes before releasing a program for testing. Having found so many, no wonder they think they must have found the lot. But they haven't. Your job is to find the remaining 1%.
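To make that arithmetic concrete, here is a minimal sketch in Python, using only the figures quoted above (150 errors per 100 lines, 99% caught before testing); the variable names are ours, purely for illustration:

    # Rough bug arithmetic using the figures quoted above; the numbers and
    # names here are illustrative, not measurements.
    lines_of_code = 100
    errors_made = 150            # about 1.5 errors per executable line written
    caught_by_programmer = 0.99  # fraction the programmer finds and fixes before testing

    bugs_left_for_testers = errors_made * (1 - caught_by_programmer)
    print(f"Bugs surviving into testing for a {lines_of_code}-line program: "
          f"about {bugs_left_for_testers:.1f}")
    # -> about 1.5 bugs per 100 lines, or roughly 15 per 1,000 lines of code.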

IS TESTING A FAILURE IF THE PROGRAM DOESN'T WORK CORRECTLY?

Is the tester doing a good job or a bad job when she proves that the program is full of bugs? If the purpose of testing is to verify that the program works correctly, then this tester is failing to achieve her purpose. This should sound ridiculous. Obviously, this is very successful testing.

Ridiculous as it seems, we have seen project managers berate testers for continuing to find errors in a program that's behind schedule. Some blame the testers for the bugs. Others just complain, often in a joking tone: "the testers are too tough on the program. Testers aren't supposed to find bugs—they're supposed to prove the program is OK, so the company can ship it." This is a terrible attitude, but it comes out under pressure. Don't be confused when you encounter it. Verification of goodness is a mediocre project manager's fantasy, not your task.

TESTERS SHOULDN'T WANT TO VERIFY THAT A PROGRAM RUNS CORRECTLY

If you think your task is to find problems, you will look harder for them than if you think your task is to verify that the program has none (Myers, 1979). It is a standard finding in psychological research that people tend to see what they expect to see. For example, proofreading is so hard because you expect to see words spelled correctly. Your mind makes the corrections automatically.

Even in making judgments as basic as whether you saw something, your expectations and motivation influence what you see and what you report seeing. For example, imagine participating in the following experiment, which is typical of signal detectability research (Green & Swets, 1966). Watch a radar screen and look for a certain blip. Report the blip whenever you see it. Practice hard. Make sure you know what to look for. Pay attention. Try to be as accurate as possible. If you expect to see many blips, or if you get a big reward for reporting blips when you see them, you'll see and report more of them—including blips that weren't there ("false alarms"). If you believe there won't be many blips, or if you're punished for false alarms, you'll miss blips that did appear on the screen ("misses").

It took experimental psychologists about 80 years of bitter experience to stop blaming experimental subjects for making mistakes in these types of experiments and realize that the researcher's own attitude and experimental setup had a big effect on the proportions of false alarms and misses.


If you expect to find many bugs, and you're praised or rewarded for finding them, you'll find plenty. A few will be false alarms. If you expect the program to work correctly, or if people complain when you find problems and punish you for false alarms, you'll miss many real problems.
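The same tradeoff is easy to see in a toy simulation. The Python sketch below is our own illustration, not part of the original text: an observer reports a blip whenever a noisy reading crosses a decision threshold, and shifting that threshold trades false alarms against misses, just as the reward and punishment conditions above do.

    # Illustrative signal-detection simulation (made-up numbers, not from the book).
    # Lower thresholds model an observer who expects, or is rewarded for, blips;
    # higher thresholds model one who is punished for false alarms.
    import random

    random.seed(1)

    def run_session(threshold, trials=10_000, signal_rate=0.3):
        false_alarms = misses = 0
        for _ in range(trials):
            blip_present = random.random() < signal_rate
            # Perceived strength: background noise, plus a weak signal when a blip is present.
            strength = random.gauss(0.0, 1.0) + (1.0 if blip_present else 0.0)
            reported = strength > threshold
            if reported and not blip_present:
                false_alarms += 1
            elif blip_present and not reported:
                misses += 1
        return false_alarms, misses

    for threshold in (0.0, 0.5, 1.0, 1.5):
        fa, miss = run_session(threshold)
        print(f"threshold {threshold:.1f}: {fa:5d} false alarms, {miss:5d} misses")
    # Lower thresholds -> more false alarms, fewer misses;
    # higher thresholds -> fewer false alarms, more misses.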

Another distressing finding is that trained, conscientious, intelligent experimenters unconsciously bias their tests, avoid running experiments that might cause trouble for their theories, misanalyze, misinterpret, and ignore test results that show their ideas are wrong (Rosenthal, 1966).

If you want and expect a program to work, you will be more likely to see a working program—you will miss failures. If you expect it to fail, you'll be more likely to see the problems. If you are punished for reporting failures, you will miss failures. You won't only fail to report them—you will not notice them.

You will do your best work if you think of your task as proving that the program is no good. You are well advised to adopt a thoroughly destructive attitude toward the program. You should want it to fail, you should expect it to fail, and you should concentrate on finding test cases that show its failures.

This is a harsh attitude. It is essential.

SO, WHY TEST?

You can't find all the bugs. You can't prove the program correct, and you don't want to. It's expensive, frustrating, and it doesn't win you any popularity contests. So, why bother testing?

THE PURPOSE OF TESTING A PROGRAM IS TO FIND PROBLEMS IN IT

Finding problems is the core of your work. You should want to find as many as possible; the more serious the problem, the better.

Since you will run out of time before running out of test cases, it is essential to use the time available as efficiently as possible. Chapters 7, 8, 12, and 13 consider priorities in detail. The guiding principle can be put simply:

A test that reveals a problem is a success. A test that did not reveal a problem was a waste of time.

Consider the following analogy, from Myers (1979). Suppose that something's wrong with you. You go to a doctor. He's supposed to run tests, find out what's wrong, and recommend corrective action. He runs test after test after test. At the end of it all, he can't find anything wrong. Is he a great tester or an incompetent diagnostician? If you really are sick, he's incompetent, and all those expensive tests were a waste of time, money, and effort. In software, you're the diagnostician. The program is the (assuredly) sick patient.
