
Chapter 12

Turning Test Agent Results into Actionable Knowledge

The stock trading information system in Chapter 11 presents a methodology, infrastructure, software design, and protocol design to implement a Web-enabled application with great scalability, reliability, and functionality. We designed user archetypes, wrote multiprotocol intelligent test agents, and made requests to an application host. First we checked for the correct functional results, and then we checked the host’s ability to serve increasing numbers of concurrent users. All of this activity provides a near-production experience from which we can uncover scalability problems, concurrency problems, and reliability problems. It also usually generates a huge amount of logged data.

Looking into the logged data allows us to see many immediate problems with the Web-enabled application under test. The log file is one of many places you can observe problems and find places to optimize the Web-enabled application. This chapter shows how to understand and analyze a Web-enabled application while the test is running and how to analyze the results data after the test is finished. With the method presented in this chapter, you will be able to demonstrate the system’s ability to achieve scalability, reliability, and functionality.

Chapter 11 took the techniques presented in earlier chapters to command services over a variety of protocols (HTTP, HTTPS, SOAP, XML-RPC) and build a test modeled after a set of user archetypes. It presented the test goals, user archetypes, and test agents for an example stock trading firm. The master component of the test handled configuration and test agent thread creation, and recorded the results to a special log file. This chapter explains how to turn the recorded results into actionable knowledge.

What to Expect from Results Analysis

Chapter 11 ended with a strong word of caution. You may be tempted to conduct a cursory review of test results for actionable knowledge. In this regard Alexander Pope had it right when he wrote: “A little learning is a dangerous thing; Drink deep, or taste not the Pierian spring.” Thoroughly analyzing test results produces actionable knowledge, whereas looking only at the surface of the test result data can lead to terrible problems for yourself, your company, and your project. So, before showing how to analyze the test result data generated from the intelligent test agents in the previous chapter, this section presents what we can reasonably expect to uncover from conducting a test.

Results data provides actionable knowledge, but the meaning may be contingent on your role in the software process. Consider the following tests and how the actionable knowledge changes depending on who is running the test. Table 12–1 describes this in detail.

In each case, the same intelligent test agents may stage a test, but the results log is analyzed to find different actionable knowledge.

Table 12–1 Actionable Knowledge Changes Depending on Who is Running the Test

Activity                                   | Test                              | Who           | Actionable knowledge
-------------------------------------------|-----------------------------------|---------------|----------------------
A software developer writes a new function | Functional test                   | Developer     | Determines that the function works and the new module is ready for testing.
Delivery of new software build             | Scalability and concurrency test  | QA technician | Identifies optimization possibilities to improve performance and reduce resource needs (CPU, disk, memory, database).
Production servers upgraded                | Rollout test                      | IT manager    | Determines when the datacenter infrastructure is capable of serving forecasted user levels.


For example, when a QA analyst looks at the log file on a server undergoing a scalability and concurrency test, the analyst will be looking for log entries that indicate when a thread becomes deadlocked because it is waiting on resources from another thread. The developer looking at the same results log would be satisfied that the module under test functioned. Therefore, a starting point in analyzing results is to understand the goals of the test and see how the goals can be translated to results.
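
Scanning a results log for such entries is easy to automate. Below is a minimal Java sketch; the log file name and the deadlock marker strings are illustrative assumptions, not the actual format of any particular agent's log.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    /** Scans a results log for entries that suggest a deadlocked thread.
     *  The file name and marker strings are illustrative assumptions. */
    public class DeadlockScan {
        public static void main(String[] args) throws IOException {
            try (Stream<String> lines = Files.lines(Paths.get("results.log"))) {
                lines.filter(l -> l.contains("deadlock") || l.contains("BLOCKED"))
                     .forEach(System.out::println); // print each suspect entry
            }
        }
    }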

Following are a few test goals and how the goals may be translated to actionable results.

Goal: Our New Web Site Needs to Handle Peak Loads of 50 Concurrent Users

Imagine a company Web site redesign that added several custom functions. Each function is driven by a Java servlet. The goal identifies the forecasted total number of concurrent users. The definition for concurrency is covered later in this chapter. For the moment, concurrency means the state where two or more people request a function at the same time.

One technique to translate the goal into an actionable result is to look at the goal in reverse. For example, how would we know when the system is not able to handle 50 concurrent users? Imagine running multiple copies of an intelligent test agent concurrently for multiple periods of time. Each test period increases the number of concurrently running agents. As the test agents run, system resources (CPU time, disk space, memory) are used and the overall performance of the Web-enabled application slows. The logged results will show that the total number of transactions decreases as more concurrent agents run.

Charting the results enables us to set criteria for acceptable performance under peak loads. For example, at 100 concurrent test agents the total number of transactions completed might be three times smaller than when 50 concurrent test agents are run. Charting the transactions completed under an increasing number of concurrent test agents enables us to pick a number between 50 and 100 concurrent test agents where system throughput is still acceptable.
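
To make the ramp concrete, here is a minimal Java sketch of such a test. It is a sketch under stated assumptions: simulateTransaction is a hypothetical placeholder for a real test agent request, and the agent counts and one-minute period are illustrative. It runs an increasing number of concurrent agent threads for a fixed period and reports the transactions completed at each level, producing the data points you would chart.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    /** Minimal concurrency-ramp harness: runs an increasing number of agent
     *  threads for a fixed period and reports completed transactions. */
    public class RampTest {
        public static void main(String[] args) throws InterruptedException {
            for (int agents = 10; agents <= 100; agents += 10) {
                AtomicLong completed = new AtomicLong();
                ExecutorService pool = Executors.newFixedThreadPool(agents);
                long end = System.currentTimeMillis() + 60_000; // one test period
                for (int i = 0; i < agents; i++) {
                    pool.execute(() -> {
                        while (System.currentTimeMillis() < end) {
                            simulateTransaction(); // placeholder for a real agent request
                            completed.incrementAndGet();
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(2, TimeUnit.MINUTES);
                System.out.printf("%d agents -> %d transactions%n", agents, completed.get());
            }
        }

        /** Stand-in for a request to the host under test. */
        private static void simulateTransaction() {
            try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

Charting agents against transactions completed then shows where throughput falls below your acceptable threshold.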

Goal: The Web Site Registration Page Needs to Work Flawlessly

Imagine a company that promotes a new fiction book. The company Web site provides a Web page for prospective customers to register to receive announcements when the book is published. A simple HTML form enables prospective customers to enter their contact information, including their email address. A Microsoft ASP.NET object serves the HTML form. When a user posts his or her contact information, the ASP.NET object needs to record the information to a database and redirect the user to a download page.

This reminds me of a time I waited for a phone call from a woman I invited out to dinner on a date. The longer I waited, the more I thought the phone might not be working. To my chagrin, I lifted the phone receiver and found that the phone was indeed working. Doing so, of course, prevented her call from getting through to me. The analog to this is testing an HTML form. Until you actually click the submit button in a browser interface, you don’t really know that the server is working. Yet, clicking the button causes the server to do actual work for you that takes resources away from real users.

One technique to translate the goal into an actionable result is to understand the duration of the goal. Consider that the only way to know that the HTML form and ASP.NET object are working flawlessly is to use them. And each time they are used and perform correctly, we have met the goal. So how long do you keep testing to achieve the goal of “flawless performance”? The goal of the test can be translated into a ratio of successes to failures.

The service achieves the goal when the ratio of successful tests of the HTML form and the ASP.NET object to failed tests exceeds a set threshold. For example, over a period of 24 hours the goal is achieved if the ratio of successful tests to tests with failures always exceeds 95%. Searching the logged results for the ratio is fairly straightforward. Alrighty then!
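
Here is one way that search might look in Java. This is a sketch under an assumed log format of one test result per line containing a PASS or FAIL token; a real agent log would need its own parsing.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    /** Tallies PASS/FAIL entries in a results log and reports the success
     *  ratio. The file name and line format are illustrative assumptions. */
    public class SuccessRatio {
        public static void main(String[] args) throws IOException {
            List<String> lines = Files.readAllLines(Paths.get("results.log"));
            long passed = lines.stream().filter(l -> l.contains("PASS")).count();
            long failed = lines.stream().filter(l -> l.contains("FAIL")).count();
            long total = passed + failed;
            double ratio = total == 0 ? 0.0 : 100.0 * passed / total;
            System.out.printf("Success ratio: %.2f%% (goal: above 95%%)%n", ratio);
        }
    }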

Goal: Customer Requests for Month-End Reports Must Not Slow Down the Order-Entry Service

A common system architecture practice puts a load-balanced group of application servers in front of a single database server. Imagine the application server providing two types of functions: one function uses many database queries to produce a month-end sales report for salespeople, and the second uses database insert commands to enter new orders into the database. In a Web environment both types of functions may be used at the same time.

One technique to translate the goal into an actionable result is to look at the nature of the goal. When the goal speaks of multiple concurrent activities, an actionable result provides feedback to tune the application. The tuning shows system performance when the ratio of activity types changes.


In this example, the goal betrays that the system slows down toward the end of the month as customers increasingly request database query-intensive reports. In this case the goal can be translated into actionable results by using a combination of two test agents: one agent requests month-end reports and the second places orders. Testing the system with 100 total agents and a changing mix of test agents shows a ratio of overall system performance to the mix of agents. For example, with 60 agents requesting month-end reports and 40 agents placing orders, the system performed twice as fast as with 80 agents requesting month-end reports and 20 agents placing orders. The changing mix of agent types and its impact on overall performance makes it possible to take action by optimizing the database and improving computing capacity with more equipment.
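
The following Java sketch shows the shape of such a mixed-load test. The simulateReport and simulateOrder methods are hypothetical placeholders for the two real test agents, and the sleep timings and mixes are illustrative only.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    /** Runs 100 total agents split between report requests and order entry,
     *  varying the mix per run to show its effect on overall performance. */
    public class MixedWorkloadTest {
        static final int TOTAL_AGENTS = 100;

        public static void main(String[] args) throws InterruptedException {
            for (int reports = 20; reports <= 80; reports += 20) {
                int orders = TOTAL_AGENTS - reports;
                long elapsed = runMix(reports, orders);
                System.out.printf("%d reports / %d orders -> %d ms%n", reports, orders, elapsed);
            }
        }

        static long runMix(int reports, int orders) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(TOTAL_AGENTS);
            long start = System.currentTimeMillis();
            for (int i = 0; i < reports; i++) pool.execute(MixedWorkloadTest::simulateReport);
            for (int i = 0; i < orders; i++) pool.execute(MixedWorkloadTest::simulateOrder);
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            return System.currentTimeMillis() - start;
        }

        /** Stand-in for a query-heavy month-end report request. */
        static void simulateReport() { pause(200); }

        /** Stand-in for a lightweight order-entry insert. */
        static void simulateOrder() { pause(20); }

        static void pause(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

Comparing elapsed times across mixes quantifies how much the query-heavy reports degrade order entry.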

Goal Summary

The examples and goals presented here are meant to show you a way to think through the goals to determine a course of action to get actionable knowledge from test results. Many times simple statistics from logged results are presented. While these statistics might look pretty, the actionable knowledge from the test results is the true goal you are after.

The Big Five Problem Patterns

Looking through raw logged results data often gives a feeling of staring up at the stars on a cold, clear winter night. The longer one looks into the stars, the more patterns emerge. In testing Web-enabled applications for scalability, performance, and reliability, five patterns emerge to identify problems and point to solutions.

Resource Problems

While there may be new software development techniques on the way, today’s Web-enabled application software is built on a “just-in-time” architecture. An application responds to requests the moment it receives the request. Web-enabled applications written to run on a host typically wait until a given resource (CPU bandwidth, disk space, memory, network bandwidth) becomes available. This is a major cause of application latency.

At any moment the hosting server must be able to provide the needed resource to the application, including resources from Web service hosts.

