Advanced PHP Programming - P6

228 Chapter 9 External Performance Tunings

Pre-Fork, Event-Based, and Threaded Process Architectures

The three main architectures used for Web servers are pre-fork, event-based, and threaded models.

In a pre-fork model, a pool of processes is maintained to handle new requests. When a new request comes

in, it is dispatched to one of the child processes for handling. A child process usually serves more than one

request before exiting. Apache 1.3 follows this model.

In an event-based model, a single process serves requests in a single thread, utilizing nonblocking or asynchronous I/O to handle multiple requests very quickly. This architecture works very well for handling static files but not terribly well for handling dynamic requests (because you still need a separate process or thread to handle the dynamic part of each request). thttpd, a small, fast Web server written by Jef Poskanzer, utilizes this model.

In a threaded model, a single process uses a pool of threads to service requests. This is very similar to a pre-fork model, except that because it is threaded, some resources can be shared between threads. The Zeus Web server utilizes this model. Even though PHP itself is thread-safe, it is difficult to impossible to guarantee that the third-party libraries used in extension code are also thread-safe. This means that even in a threaded Web server, it is often necessary to avoid a threaded PHP and instead use forked process execution via the fastcgi or cgi implementations.

Apache 2 uses a drop-in process architecture that allows it to be configured as a pre-fork, threaded, or

hybrid architecture, depending on your needs.

In contrast to the amount of configuration inside Apache, the PHP setup is very similar to the way it was before. The only change to its configuration is to add the following to its httpd.conf file:

Listen localhost:80

This binds the PHP instance exclusively to the loopback address. Now if you want to

access the Web server, you must contact it by going through the proxy server.

Benchmarking the effect of these changes is difficult. Because these changes mainly reduce the overhead associated with handling clients over high-latency links, it is difficult to measure the effects on a local or high-speed network. In a real-world setting, I have seen a reverse-proxy setup cut the number of Apache children necessary to support a site from 100 to 20.

Operating System Tuning for High Performance

There is a strong argument that if you do not want to perform local caching, then using a reverse proxy is overkill. A way to get a similar effect without running a separate server is to allow the operating system itself to buffer all the data. In the discussion of reverse proxies earlier in this chapter, you saw that a major component of the network wait time is the time spent blocking between data packets to the client.

The application is forced to send multiple packets because the operating system has a

limit on how much information it can buffer to send over a TCP socket at one time.

Fortunately, this is a setting that you can tune.



On FreeBSD, you can adjust the TCP buffers via the following:

# sysctl -w net.inet.tcp.sendspace=131072
# sysctl -w net.inet.tcp.recvspace=8192

On Linux, you do this:

# echo "131072" > /proc/sys/net/core/wmem_max

When you make either of these changes, you set the outbound TCP buffer space to 128KB and the inbound buffer space to 8KB (because you receive small inbound requests and make large outbound responses). This assumes that the maximum page size you will be sending is 128KB. If your page sizes differ from that, you need to change the tunings accordingly. In addition, you might need to tune kern.ipc.nmbclusters to allocate sufficient memory for the new large buffers. (See your friendly neighborhood systems administrator for details.)

After adjusting the operating system limits, you need to instruct Apache to use the

large buffers you have provided. For this you just add the following directive to your

httpd.conf file:

SendBufferSize 131072

Finally, you can eliminate the network lag on connection close by installing the lingerd patch to Apache. When a network connection is finished, the sender sends the receiver a FIN packet to signify that the connection is complete. The sender must then wait for the receiver to acknowledge the receipt of this FIN packet before closing the socket to ensure that all data has in fact been transferred successfully. After the FIN packet is sent, Apache does not need to do anything with the socket except wait for the FIN-ACK packet and close the connection. The lingerd process improves the efficiency of this operation by handing the socket off to an exterior daemon (lingerd), which just sits around waiting for FIN-ACKs and closing sockets.

For high-volume Web servers, lingerd can provide significant performance benefits,

especially when coupled with increased write buffer sizes. lingerd is incredibly simple

to compile. It is a patch to Apache (which allows Apache to hand off file descriptors for

closing) and a daemon that performs those closes. lingerd is in use by a number of

major sites, including Sourceforge.com, Slashdot.org, and LiveJournal.com.

Proxy Caches

Even better than having a low-latency connection to a content server is not having to

make the request at all. HTTP takes this into account.

HTTP caching exists at many levels:

- Caches are built into reverse proxies
- Proxy caches exist at the end user's ISP
- Caches are built into the user's Web browser



Figure 9.5 shows a typical reverse proxy cache setup. When a user makes a request to www.example.foo, the DNS lookup actually points the user to the proxy server. If the requested entry exists in the proxy's cache and is not stale, the cached copy of the page is returned to the user, without the Web server ever being contacted at all; otherwise, the connection is proxied to the Web server as in the reverse proxy situation discussed earlier in this chapter.

Figure 9.5 A request through a reverse proxy.

Many of the reverse proxy solutions, including Squid, mod_proxy, and mod_accel, support integrated caching. Using a cache that is integrated into the reverse proxy server is an easy way of extracting extra value from the proxy setup. Having a local cache guarantees that all cacheable content will be aggressively cached, reducing the workload on the back-end PHP servers.




Cache-Friendly PHP Applications

To take advantage of caches, PHP applications must be made cache friendly. A cache-friendly application understands how the caching policies in browsers and proxies work and how cacheable its own data is. The application can then be set to send appropriate cache-related directives to browsers to achieve the desired results.

There are four HTTP headers that you need to be conscious of in making an application cache friendly:

- Last-Modified
- Expires
- Pragma: no-cache
- Cache-Control

The Last-Modified HTTP header is a keystone of HTTP 1.0 cache negotiation. Last-Modified is the Coordinated Universal Time (UTC; formerly GMT) date of last modification of the page. When a cache attempts a revalidation, it sends the Last-Modified date as the value of its If-Modified-Since header field so that it can let the server know which copy of the content it should be revalidated against.
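This handshake can be sketched in PHP. The following is a hedged, minimal example, not the book's own code; using the script's own modification time as the page's modification time is an assumption made purely for illustration:

```php
// Hypothetical last-modification time for this page; a real application
// would derive it from the underlying content (a file, a database row, etc.).
$page_mtime = filemtime(__FILE__);

// A revalidating cache echoes our Last-Modified date back as If-Modified-Since.
if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
    strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $page_mtime) {
    // The cached copy is still current: answer 304 and send no body.
    header('HTTP/1.0 304 Not Modified');
    exit;
}

// Otherwise serve the full page with a fresh Last-Modified header.
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $page_mtime) . ' GMT');
```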

The Expires header field is the nonrevalidation component of HTTP 1.0 cache control. The Expires value consists of a GMT date after which the contents of the requested document should no longer be considered valid.
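For instance, a page that may be served from cache for one hour without contacting the server could send the following (the 3600-second lifetime here is an illustrative assumption, not a recommendation):

```php
// Mark the response as fresh until one hour from now; HTTP dates are
// always expressed in GMT.
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 3600) . ' GMT');
```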

Many people also view Pragma: no-cache as a header that should be set to avoid objects being cached. Although there is nothing to be lost by setting this header, the HTTP specification does not provide an explicit meaning for it in responses, so its usefulness rests on its being a de facto standard implemented in many HTTP 1.0 caches.

In the late 1990s, when many clients spoke only HTTP 1.0, the cache negotiation options for applications were rather limited. It used to be standard practice to add the following headers to all dynamic pages:

function http_1_0_nocache_headers()
{
    $pretty_modtime = gmdate('D, d M Y H:i:s') . ' GMT';
    header("Last-Modified: $pretty_modtime");
    header("Expires: $pretty_modtime");
    header("Pragma: no-cache");
}

This effectively tells all intervening caches that the data is not to be cached and should always be refreshed.

When you look over the possibilities given by these headers, you see that there are

some glaring deficiencies:



- Setting an expiration time as an absolute timestamp requires that the client and server system clocks be synchronized.
- The cache in a client's browser is quite different from the cache at the client's ISP. A browser cache could conceivably cache personalized data on a page, but a proxy cache shared by numerous users cannot.

These deficiencies were addressed in the HTTP 1.1 specification, which added the Cache-Control directive set to tackle these problems. The possible values for a Cache-Control response header are set in RFC 2616 and are defined by the following syntax:

Cache-Control = "Cache-Control" ":" 1#cache-response-directive

cache-response-directive =
      "public"
    | "private"
    | "no-cache"
    | "no-store"
    | "no-transform"
    | "must-revalidate"
    | "proxy-revalidate"
    | "max-age" "=" delta-seconds
    | "s-maxage" "=" delta-seconds

The Cache-Control directive specifies the cacheability of the document requested. According to RFC 2616, all caches and proxies must obey these directives, and the headers must be passed along through all proxies to the browser making the request.
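From PHP, any of these directives is emitted like any other header. As a hedged sketch (the particular directive values here are chosen only for illustration):

```php
// Let only the requesting browser (not shared proxy caches) keep a copy,
// and consider it fresh for five minutes.
header('Cache-Control: private, max-age=300');
```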

To specify whether a request is cacheable, you can use the following directives:

- public—The response can be cached by any cache.
- private—The response may be cached in a nonshared cache. This means that the request is to be cached only by the requestor's browser and not by any intervening caches.
- no-cache—The response must not be cached by any level of caching.
- no-store—The information being transmitted is sensitive and must not be stored in nonvolatile storage.

If an object is cacheable, the final directives allow specification of how long an object may be stored in cache:

- must-revalidate—All caches must always revalidate requests for the page. During verification, the browser sends an If-Modified-Since header in the request. If the server validates that the page represents the most current copy of the page, it should return a 304 Not Modified response to the client. Otherwise, it should send back the requested page in full.

- proxy-revalidate—This directive is like must-revalidate, but with proxy-revalidate, only shared caches are required to revalidate their contents.

- max-age—This is the time in seconds that an entry is considered to be cacheable
