OPTICAL IMAGING AND SPECTROSCOPY, Part 7

mask pixel. If each value of H can be independently selected, the number of code

values greatly exceeds the number of signal pixels reconstructed. Pixel coding is

commonly used in spectroscopy and spectral imaging. Structured spatial and temporal modulation of object illumination is also an example of pixel coding. In

imaging systems, focal plane foveation and some forms of embedded readout

circuit processing may also be considered as pixel coding. The impulse response

of a pixel coded system is shift-variant. Physical constraints typically limit the

maximum value or total energy of the elements of H.

† Convolutional coding refers to systems with shift-invariant impulse response

h(x − x′). As we have seen in imaging system analysis, convolutional coding

is exceedingly common in optical systems, with conventional focal imaging as

the canonical example. Further examples arise in dispersive spectroscopy. We

further divide convolutional coding into projective coding, under which code parameters directly modulate the spatial structure of the impulse response, and

Fourier coding, under which code parameters modulate the spatial structure of

the transfer function. Coded aperture imaging and computed tomography are

examples of projective coding systems. Section 10.2 describes the use of pupil

plane modulation to implement Fourier coding for extended depth of field. The

number of code elements in a convolutional code corresponds to the number

of resolution elements in the impulse response. Since the support of the

impulse response is usually much less than the support of the image, the

number of code elements per image pixel is much less than one.

† Implicit coding refers to systems where code parameters do not directly modulate

H. Rather, the physical structure of optical elements and the sampling geometry

are selected to create an invertible measurement code. Reference structure tomography, van Cittert–Zernike-based imaging, and Fourier transform spectroscopy are examples of implicit coding. Spectral filtering using thin-film

filters is another example of implicit coding. More sophisticated spatiospectral coding using photonic crystal, plasmonic, and thin-film filters is under exploration. The number of coding parameters per signal pixel in current implicit coding

systems is much less than one, but as the science of complex optical design and

fabrication develops, one may imagine more sophisticated implicit coding

systems.

The goal of this chapter is to provide the reader with a context for discussing spectrometer and imager design in Chapters 9 and 10. We do not discuss physical

implementations of pixel, convolutional, or implicit codes in this chapter. Each

coding strategy arises in diverse situations; practical sensor codes often combine

aspects of all three. In considering sensor designs, the primary goal is always to

compare system performance metrics against design choices. Accurate sampling

and signal estimation models are central to such comparisons. We learned how to

model sampling in Chapter 7; the present chapter discusses basic strategies for

signal estimation and how these strategies impact code design for each type of code.

The reader may find the pace of discussion a bit unusual in this chapter. Apt

comparison may be made with Chapter 3, which progresses from traditional

Fourier sampling theory through modern multiscale sampling. Similarly, the

present chapter describes results that are 50–200 years old in discussing linear estimation strategies for pixel and convolutional coding in Sections 8.2 and 8.3. As with

wavelets in Chapter 3, Sections 8.4 and 8.5 describe relatively recent perspectives,

focusing in this case on regularization, generalized sampling, and nonlinear signal

inference. A sharp distinction exists in the impact of modern methods, however. In

the transition from Fourier to multiband sampling, new theories augment and

extend Shannon’s basic approach. Nonlinear estimators, on the other hand, substantially replace and revolutionize traditional linear estimators and completely undermine traditional approaches to sampling code design. As indicated by the hierarchy

of data readout and processing steps described in Section 7.4, nonlinear processing

has become ubiquitous even in the simplest and most isomorphic sensor systems.

A system designer refusing to apply multiscale methods can do reasonable, if unfortunately constrained, work, but competitive design cannot refuse the benefits of nonlinear inference.

While the narrative of this chapter through coding strategies also outlines the basic

landscape of coding and inverse problems, our discussion just scratches the surface of

digital image estimation and analysis. We cannot hope to provide even a representative bibliography, but we note that more recent accessible discussions of inverse problems in imaging are presented by Blahut [21], Bertero and Boccacci [19], and

Barrett and Myers [8]. The point estimation problem and regularization methods

are well covered by Hansen [111], Vogel [241], and Aster et al. [6]. A modern text

covering image processing, generalized sampling, and convex optimization has yet

to be published, but the text and extensive websites of Boyd and Vandenberghe

[24] provide an excellent overview of the broad problem.

8.2 PIXEL CODING

Let f be a discrete representation of an optical signal, and let g represent a measurement. We assume that both f and g represent optical power densities, meaning that f_i and g_i are real with f_i, g_i ≥ 0. The transformation from f to g is

g = Hf + n    (8.1)

where n represents measurement noise. Pixel coding consists of codesign of the elements of H and a signal estimation algorithm.

The range of the code elements h_ij is constrained in physical systems. Typically, h_ij is nonnegative. Common additional constraints include 0 ≤ h_ij ≤ 1 or Σ_i h_ij ≤ 1. Design of H subject to constraints is a weighing design problem. A classic example of the weighing design problem is illustrated in Fig. 8.3. The problem is to determine the masses of N objects using a balance. One may place objects singly or in groups on the left or right side. One places a calibrated mass on the

right side to balance the scale. The ith measurement takes the form

g_i + Σ_j h_ij m_j = 0    (8.2)

where m_j is the mass of the jth object; h_ij is +1 for objects on the right, −1 for objects on the left, and 0 for objects left out of the ith measurement. While one might naively choose to weigh each object on the scale in series (e.g., select h_ij = δ_ij), this strategy is just one of many possible weighing designs and is not necessarily the one that produces the best estimate of the object weights. The “best” strategy is the one that enables the most accurate estimation of the weights in the context of a noise and error model for measurement. If, for example, the error in each measurement is independent of the masses weighed, then one can show that the mean-square error in weighing the set of objects is reduced by group testing using the Hadamard testing strategy discussed below.
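The advantage of group weighing is easy to check numerically. The sketch below is illustrative only: the noise level, trial count, and variable names are invented, and the sign convention g = Hm + n drops the minus sign of Eqn. (8.2), which does not affect the error. It compares weighing eight objects one at a time against an 8 × 8 Hadamard group design:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                              # number of objects
masses = rng.uniform(1.0, 10.0, N)
sigma = 0.1                        # std. dev. of additive measurement error
trials = 2000

# 8x8 Hadamard matrix with entries in {-1, +1}
H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)

def mean_square_error(design):
    """Average squared error of least-squares mass estimates."""
    err = 0.0
    for _ in range(trials):
        g = design @ masses + rng.normal(0.0, sigma, N)
        estimate = np.linalg.solve(design.T @ design, design.T @ g)
        err += np.mean((estimate - masses) ** 2)
    return err / trials

mse_single = mean_square_error(np.eye(N))  # one object per measurement
mse_group = mean_square_error(H8)          # Hadamard group weighing
print(mse_single, mse_group)               # group error is roughly N times smaller
```

With error independent of the masses, theory predicts per-object variances of σ² and σ²/N respectively, so the group design wins by roughly the number of objects.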

8.2.1 Linear Estimators

In statistics, the problem of estimating f from g in Eqn. (8.1) is called point estimation. The most common solution relies on a regression model with a goal of minimizing the difference between the measurement vector Hf_e produced by an estimate of f and the observed measurements g. The mean-square regression error is

ε(f_e) = ⟨(g − Hf_e)′(g − Hf_e)⟩    (8.3)

The minimum of ε with respect to f_e occurs at ∂ε/∂f_e = 0, which is equivalent to

−H′g + H′Hf_e = 0    (8.4)

This produces the ordinary least-squares (OLS) estimator for f:

f_e = (H′H)⁻¹H′g    (8.5)

Figure 8.3 Weighing objects on a balance.
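In code, the OLS estimator of Eqn. (8.5) is one line. The sketch below is a minimal illustration with invented dimensions and a generic random code matrix, not a system from the text; it also shows the numerically preferable route through a least-squares solver rather than an explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
f = rng.uniform(0.0, 1.0, N)           # unknown signal (nonnegative densities)
H = rng.uniform(0.0, 1.0, (N, N))      # generic full-rank code matrix
g = H @ f + rng.normal(0.0, 1e-3, N)   # forward model of Eqn. (8.1)

# OLS estimator f_e = (H'H)^{-1} H' g, Eqn. (8.5), via a linear solve
f_e = np.linalg.solve(H.T @ H, H.T @ g)

# Equivalent but better conditioned: solve the least-squares problem directly
f_ls, *_ = np.linalg.lstsq(H, g, rcond=None)
print(np.max(np.abs(f_e - f_ls)))      # the two estimates agree
```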

So far, we have made no assumptions about the noise vector n. We have only assumed that our goal is to find a signal estimate that minimizes the mean-square error when placed in the forward model for the measurement. If the expected value of the noise vector ⟨n⟩ is nonzero, then the linear estimate f_e will in general be biased. If, on the other hand,

⟨n⟩ = 0    (8.6)

and

⟨nn′⟩ = σ²I    (8.7)

then the OLS estimator is unbiased and the covariance of the estimate is

Σ_fe = σ²(H′H)⁻¹    (8.8)

The Gauss–Markov theorem [147] states that the OLS estimator is the best linear unbiased estimator, where “best” in this context means that the covariance is minimal. Specifically, if Σ̃_fe is the covariance for another linear estimator f̃_e, then Σ̃_fe − Σ_fe is a positive semidefinite matrix.

In practical sensor systems, many situations arise in which the axioms of the Gauss–Markov theorem are not valid and in which nonlinear estimators are preferred. The OLS estimator, however, is a good starting point for the fundamental challenge of sensor system coding, which is to codesign H and signal inference algorithms so as to optimize system performance metrics. Suppose, specifically, that the system metric is the mean-square estimation error

σ_e² = (1/N) trace(Σ_fe)    (8.9)

where H′H is an N × N matrix. If we choose the OLS estimator as our signal inference algorithm, then the system metric is optimized by choosing H to minimize trace[(H′H)⁻¹].

The selection of H for a given measurement system balances the goal of minimizing estimation error against physical implementation constraints. In the case that Σ_i h_ij ≤ 1, for example, the best choice is the identity h_ij = δ_ij. This is the most common case for imaging, where the amount of energy one can extract from each pixel is finite.
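This claim can be checked numerically. The sketch below is an illustration of mine, not from the text: it compares trace[(H′H)⁻¹] for the identity against one alternative satisfying the same per-pixel energy constraint, a 0/1 S matrix with columns scaled to unit sum:

```python
import numpy as np

def trace_inv_gram(H):
    """trace[(H'H)^{-1}], the figure of merit from Eqn. (8.9)."""
    return np.trace(np.linalg.inv(H.T @ H))

N = 7  # unknowns; the S matrix below comes from an 8x8 Hadamard matrix

H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)
S = (1 - H8[1:, 1:]) // 2        # 7x7 S matrix with entries in {0, 1}
S_scaled = S / S.sum(axis=0)     # each column sums to 1 (unit pixel energy)

t_identity = trace_inv_gram(np.eye(N))  # all energy into one measurement
t_spread = trace_inv_gram(S_scaled)     # energy split across measurements
print(t_identity, t_spread)             # about 7 versus 49; the identity wins
```

Splitting each pixel's bounded energy across measurements weakens every individual measurement without a compensating multiplex gain, consistent with isomorphic imaging being the common case.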

8.2.2 Hadamard Codes

Considering the weighing design constraint |h_ij| ≤ 1, Hotelling proved in 1944 that for h_ij ∈ [−1, 1]

σ_e² ≥ σ²/N    (8.10)

under the assumptions of Eqn. (8.6). The measurement matrix H that achieves Hotelling’s minimum estimation variance had been explored a half century earlier

by Hadamard. A Hadamard matrix H_n of order n is an n × n matrix with elements h_ij ∈ {−1, +1} such that

H_n H_n′ = nI    (8.11)

where I is the n × n identity matrix. As an example, we have

H_2 = [ +  + ]
      [ +  − ]    (8.12)

If H_a and H_b are Hadamard matrices, then the Kronecker product H_ab = H_a ⊗ H_b is a Hadamard matrix of order ab. Applying this rule to H_2, we find

H_4 = [ +  +  +  + ]
      [ +  −  +  − ]
      [ +  +  −  − ]
      [ +  −  −  + ]    (8.13)

Recursive application of the Kronecker product yields Hadamard matrices for n = 2^m. In addition to n = 1 and n = 2, it is conjectured that Hadamard matrices exist for all n = 4m, where m is an integer. Currently (2008) n = 668 (m = 167) is the smallest number for which this conjecture is unproven.
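The Kronecker recursion is a few lines in practice. The helper below is my own sketch (scipy.linalg.hadamard provides the same construction); it builds H of order 2^m and checks the defining property of Eqn. (8.11):

```python
import numpy as np

def hadamard(m):
    """Build the 2^m x 2^m Hadamard matrix by repeated Kronecker products."""
    H = np.array([[1]])
    H2 = np.array([[1, 1], [1, -1]])
    for _ in range(m):
        H = np.kron(H, H2)   # order doubles on each application
    return H

H4 = hadamard(2)
n = H4.shape[0]
# Defining property, Eqn. (8.11): H_n H_n' = n I
assert np.array_equal(H4 @ H4.T, n * np.eye(n))
```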

Assuming that the measurement matrix H is a Hadamard matrix (H′H = NI), we obtain

Σ_fe = (σ²/N) I    (8.14)

and

σ_e² = σ²/N    (8.15)

If there is no Hadamard matrix of order N, the minimum variance is somewhat worse. Hotelling also considered measurements h_ij ∈ {0, 1}, which arise for weighing with a spring scale rather than a balance. The nonnegative measurement constraint 0 ≤ h_ij ≤ 1 is common in imaging and spectroscopy. As discussed by Harwit and Sloane [114], minimum variance least-squares estimation under this constraint is achieved using the Hadamard S matrix:

S_n = (1/2)(1 − H_n)    (8.16)

Under this definition, the first row and column of S_n vanish, meaning that S_n is an (n − 1) × (n − 1) measurement matrix. The effect of using the S matrix of order n rather than the bipolar Hadamard matrix is an approximately four-fold increase in the least-squares variance.
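The four-fold penalty can be verified directly. The sketch below is my own numerical check with invented helper names; it compares the average least-squares variance per unknown for a bipolar Hadamard design and the corresponding S matrix:

```python
import numpy as np

# Hadamard matrix of order n = 64 by the Kronecker construction
H2 = np.array([[1, 1], [1, -1]])
H = np.array([[1]])
for _ in range(6):
    H = np.kron(H, H2)
n = H.shape[0]

S = (1 - H[1:, 1:]) // 2          # (n-1) x (n-1) S matrix of Eqn. (8.16)

def avg_variance(A):
    """Average least-squares variance per unknown, in units of sigma^2."""
    return np.trace(np.linalg.inv(A.T @ A)) / A.shape[1]

ratio = avg_variance(S) / avg_variance(H)
print(ratio)   # 4(n-1)/n = 3.9375 for n = 64, approaching 4 as n grows
```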
