
6

Principal Component Analysis and Whitening

Principal component analysis (PCA) and the closely related Karhunen-Loève transform, or the Hotelling transform, are classic techniques in statistical data analysis,

feature extraction, and data compression, stemming from the early work of Pearson

[364]. Given a set of multivariate measurements, the purpose is to find a smaller set of

variables with less redundancy that would give as good a representation as possible.

This goal is related to the goal of independent component analysis (ICA). However,

in PCA the redundancy is measured by correlations between data elements, while

in ICA the much richer concept of independence is used, and in ICA the reduction

of the number of variables is given less emphasis. Using only the correlations as in

PCA has the advantage that the analysis can be based on second-order statistics only.

In connection with ICA, PCA is a useful preprocessing step.

The basic PCA problem is outlined in this chapter. Both the closed-form solution

and on-line learning algorithms for PCA are reviewed. Next, the related linear

statistical technique of factor analysis is discussed. The chapter is concluded by

presenting how data can be preprocessed by whitening, removing the effect of first- and second-order statistics, which is very helpful as the first step in ICA.

6.1 PRINCIPAL COMPONENTS

The starting point for PCA is a random vector x with n elements. There is available a sample $x(1), \ldots, x(T)$ from this random vector. No explicit assumptions on the probability density of the vectors are made in PCA, as long as the first- and second-order statistics are known or can be estimated from the sample. Also, no generative


model is assumed for vector x. Typically the elements of x are measurements like

pixel gray levels or values of a signal at different time instants. It is essential in

PCA that the elements are mutually correlated, and there is thus some redundancy

in x, making compression possible. If the elements are independent, nothing can be

achieved by PCA.

In the PCA transform, the vector x is first centered by subtracting its mean:

$$x \leftarrow x - E\{x\}$$

The mean is in practice estimated from the available sample $x(1), \ldots, x(T)$ (see

Chapter 4). Let us assume in the following that the centering has been done and thus

$E\{x\} = 0$. Next, x is linearly transformed to another vector y with m elements,

$m \leq n$, so that the redundancy induced by the correlations is removed. This is

done by finding a rotated orthogonal coordinate system such that the elements of x in the new coordinates become uncorrelated. At the same time, the variances of

the projections of x on the new coordinate axes are maximized so that the first axis

corresponds to the maximal variance, the second axis corresponds to the maximal

variance in the direction orthogonal to the first axis, and so on.
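As a minimal numerical sketch of these steps (not from the book; the synthetic data and all variable names are purely illustrative), the following centers a sample, diagonalizes the estimated covariance matrix, and checks that the projections on the sorted eigenvector axes are uncorrelated, with the eigenvalues as their variances:

```python
import numpy as np

rng = np.random.default_rng(0)

# A sample x(1), ..., x(T) of correlated n-dimensional data
# (synthetic, for illustration only).
T, n = 1000, 3
A = rng.normal(size=(n, n))
X = rng.normal(size=(T, n)) @ A.T      # rows are the observations x(t)

# Center: subtract the sample estimate of E{x}.
X = X - X.mean(axis=0)

# Estimate the covariance matrix and find the rotated coordinate system:
# its axes are the eigenvectors, ordered by decreasing variance.
C = (X.T @ X) / T
eigvals, E = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, E = eigvals[order], E[:, order]

# Principal components: projections of x on the new coordinate axes.
Y = X @ E

# Their covariance is (numerically) diagonal, so the components are
# uncorrelated; the diagonal holds the eigenvalues, i.e., the variances.
print(np.round((Y.T @ Y) / T, 6))
print(np.round(eigvals, 6))
```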

For instance, if x has a gaussian density that is constant over ellipsoidal surfaces

in the n-dimensional space, then the rotated coordinate system coincides with the

principal axes of the ellipsoid. A two-dimensional example is shown in Fig. 2.7 in

Chapter 2. The principal components are now the projections of the data points on the

two principal axes, $e_1$ and $e_2$. In addition to achieving uncorrelated components, the

variances of the components (projections) also will be very different in most applications, with a considerable number of the variances so small that the corresponding

components can be discarded altogether. Those components that are left constitute

the vector y.

As an example, take a set of $8 \times 8$ pixel windows from a digital image, an application that is considered in detail in Chapter 21. They are first transformed, e.g., using row-by-row scanning, into vectors x whose elements are the gray levels of the 64 pixels

in the window. In real-time digital video transmission, it is essential to reduce this

data as much as possible without losing too much of the visual quality, because the

total amount of data is very large. Using PCA, a compressed representation vector y

can be obtained from x, which can be stored or transmitted. Typically, y can have as

few as 10 elements, and a good replica of the original $8 \times 8$ image window can still

be reconstructed from it. This kind of compression is possible because neighboring

elements of x, which are the gray levels of neighboring pixels in the digital image,

are heavily correlated. These correlations are utilized by PCA, allowing almost the

same information to be represented by a much smaller vector y. PCA is a linear

technique, so computing y from x is not heavy, which makes real-time processing

possible.
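The following sketch illustrates this compression scheme under stated assumptions: the synthetic, spatially correlated patches merely stand in for real $8 \times 8$ image windows, and $m = 10$ components are retained, mirroring the figures in the text. All data and names are illustrative, not the book's own implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for 8 x 8 windows scanned row by row into 64-dimensional
# vectors: smooth random patches that mimic the strong correlation
# between gray levels of neighboring pixels.
T = 5000
patches = rng.normal(size=(T, 8, 8)).cumsum(axis=1).cumsum(axis=2)
X = patches.reshape(T, 64)

# Center the data and estimate the 64 x 64 covariance matrix.
mean = X.mean(axis=0)
Xc = X - mean
C = (Xc.T @ Xc) / T

# Eigenvectors sorted by decreasing variance.
eigvals, E = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
E = E[:, order]

# Keep only the m = 10 highest-variance directions: y is the short
# representation that would be stored or transmitted.
m = 10
W = E[:, :m]
Y = Xc @ W                      # compression: 64 numbers -> 10 per window

# Reconstruct an approximate replica of each original window.
X_hat = Y @ W.T + mean
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error with m = {m}: {rel_err:.3f}")
```

Because the retained directions carry nearly all of the variance of such heavily correlated data, the reconstruction error stays small even though only 10 of 64 numbers are kept per window.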
