OPTICAL IMAGING AND SPECTROSCOPY, Part 2
Figure 2.28 Base transmission pattern, tiled mask, and inversion deconvolution for p = 5.
Figure 2.29 Base transmission pattern, tiled mask, and inversion deconvolution for p = 11.
GEOMETRIC IMAGING
implemented under cyclic boundary conditions rather than using zero padding. In
contrast with a pinhole system, the number of pixels in the reconstructed coded aperture image is equal to the number of pixels in the base transmission pattern.
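The cyclic reconstruction described above can be sketched numerically. The following is a minimal illustration, not the book's MURA decoding: it uses an arbitrary invertible 5 × 5 binary mask (chosen so its DFT has no zeros) and performs inversion deconvolution in the Fourier domain under cyclic (periodic) boundary conditions.

```python
import numpy as np

# Arbitrary separable 5x5 binary "transmission pattern"; illustrative only,
# not an actual MURA pattern. Its 2D DFT has no zeros, so cyclic inversion
# is well posed.
m1 = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
mask = np.outer(m1, m1)

# A toy 5x5 object.
rng = np.random.default_rng(0)
obj = rng.random((5, 5))

# Coded aperture measurement: cyclic convolution of object with mask.
M = np.fft.fft2(mask)
g = np.real(np.fft.ifft2(np.fft.fft2(obj) * M))

# Inversion deconvolution under the same cyclic boundary conditions.
assert np.min(np.abs(M)) > 1e-6          # mask spectrum has no zeros
recon = np.real(np.fft.ifft2(np.fft.fft2(g) / M))

print(np.allclose(recon, obj))           # exact recovery in the noise-free case
```

Note that the reconstruction has exactly as many pixels as the base pattern, consistent with the cyclic (rather than zero-padded) formulation described above.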
Figures 2.31–2.33 are simulations of coded aperture imaging with the 59 × 59-element MURA code. As illustrated in the figures, the measured 59 × 59-element data are strongly positive. For this image the maximum noise-free measurement value is 100, and the minimum value is 58, for a measurement dynamic range of less than 2. We will discuss noise and entropic measures of sensor system performance at various points in this text; in this first encounter with a multiplex measurement system we simply note that isomorphic measurement of the image would produce a much higher measurement dynamic range for this image.
In practice, noise sensitivity is a primary concern in coded aperture and other multiplex sensor systems. For the MURA-based coded aperture system, Gottesman and
Fenimore [102] argue that the pixel signal-to-noise ratio is
$\mathrm{SNR}_{ij} = \dfrac{N f_{ij}}{\sqrt{N f_{ij} + N \sum_{kl} f_{kl} + \sum_{kl} B_{kl}}}$ (2.47)
where N is the number of holes in the coded aperture and Bkl is the noise in the (kl)th
pixel. The form of the SNR in this case is determined by signal-dependent, or “shot,”
noise. We discuss the noise sources in electronic optical detectors in Chapter 5 and
Figure 2.30 Base transmission pattern, tiled mask, and inversion deconvolution for p = 59.
2.5 PINHOLE AND CODED APERTURE IMAGING
Figure 2.31 Coded aperture imaging simulation with no noise for the 59 × 59-element code of Fig. 2.30.
Figure 2.32 Coded aperture imaging simulation with shot noise for the 59 × 59-element code of Fig. 2.30.
derive the square-root characteristic form of shot noise in particular. For the 59 × 59 MURA aperture, N = 1749. If we assume that the object consists of binary values 1 and 0, the maximum pixel SNR falls from 41 for a point object to 3 for an object with 200 points active. The smiley face object of Fig. 2.31 consists of 155 points.
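The falloff quoted above can be checked directly against Eqn. (2.47). This sketch assumes a binary object and negligible background B, and takes the sum over all active object pixels; the exact point-object value depends on convention (e.g., whether the (ij)th pixel is excluded from the sum), so only the trend is the point here.

```python
import math

def mura_pixel_snr(N, f_ij, f_sum, B_sum=0.0):
    """Pixel SNR per Eqn. (2.47): N f_ij / sqrt(N f_ij + N sum(f) + sum(B))."""
    return N * f_ij / math.sqrt(N * f_ij + N * f_sum + B_sum)

N = 1749                                    # open elements in the 59x59 MURA aperture
snr_point = mura_pixel_snr(N, 1.0, 1.0)     # single active point
snr_200 = mura_pixel_snr(N, 1.0, 200.0)     # 200 active points

# SNR collapses roughly as 1/sqrt(number of active points).
print(round(snr_point, 1), round(snr_200, 1))
```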
Dependence of the SNR on object complexity is a unique feature of multiplex
sensor systems. The equivalent of Eqn. (2.47) for a focal imaging system is
$\mathrm{SNR}_{ij} = \dfrac{N f_{ij}}{\sqrt{N f_{ij} + B_{ij}}}$ (2.48)

This system produces an SNR of approximately $\sqrt{N}$, independent of the number of points in the object.
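By contrast with Eqn. (2.47), Eqn. (2.48) contains no cross-pixel sum, so the focal system's SNR does not degrade as points are added. A quick sketch under the same binary-object, shot-noise-limited assumption:

```python
import math

def focal_pixel_snr(N, f_ij, B_ij=0.0):
    """Pixel SNR per Eqn. (2.48): N f_ij / sqrt(N f_ij + B_ij)."""
    return N * f_ij / math.sqrt(N * f_ij + B_ij)

N = 1749
# Shot-noise-limited (B = 0), binary object: SNR = sqrt(N f_ij) ~ sqrt(N),
# regardless of how many other object points are active.
print(focal_pixel_snr(N, 1.0), math.sqrt(N))
```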
As with the canonical wave and correlation field multiplex systems presented in Sections 10.2 and 6.4.2, coded aperture imaging provides a very high depth of field image but also suffers SNR deterioration in proportion to source complexity.
2.6 PROJECTION TOMOGRAPHY
To this point we have considered images as two-dimensional distributions, despite
the fact that target objects and the space in which they are embedded are typically
Figure 2.33 Coded aperture imaging simulation with additive noise for the 59 × 59-element code of Fig. 2.30.
three-dimensional. Historically, images were two-dimensional because focal imaging
is a plane-to-plane transformation and because photochemical and electronic detector
arrays are typically 2D films or focal planes. Using computational image synthesis,
however, it is now common to form 3D images from multiplex measurements. Of
course, visualization and display of 3D images then presents new and different
challenges.
A variety of methods have been applied to 3D imaging, including techniques
derived from analogy with biological stereo vision systems and actively illuminated
acoustic and optical ranging systems. Each approach has advantages specific to targeted object classes and applications. Ranging and stereo vision are best adapted
to opaque objects where the goal is to estimate a surface topology embedded in
three dimensions.
The present section and the next briefly overview tomographic methods for multidimensional imaging. These sections rely on analytical techniques and concepts, such
as linear transform theory, the Fourier transform and vector spaces, which are not formally introduced until Chapter 3. The reader unfamiliar with these concepts may find
it useful to read the first few sections of that chapter before proceeding. Our survey of
computed tomography is necessarily brief; detailed surveys are presented by Kak and
Slaney [131] and Buzug [37].
Tomography relies on a simple 3D extension of the density-based object model
that we have applied in this chapter. The word tomography is derived from the
Greek tomos, meaning slice or section, and graphia, meaning describing. The
word predates computational methods and originally referred to an analog technique
for imaging a cross section of a moving object. While tomography is sometimes used
to refer to any method for measuring 3D distributions (i.e., optical coherence
tomography; Section 6.5), computed tomography (CT) generally refers to the projection methods described in this section.
Despite our focus on 3D imaging, we begin by considering tomography of 2D
objects using a one-dimensional detector array. 2D analysis is mathematically
simpler and is relevant to common X-ray illumination and measurement hardware.
2D slice tomography systems are illustrated in Fig. 2.34. In parallel beam systems, a collimated beam of X rays illuminates the object. The object is rotated in front of the X-ray source, and a one-dimensional detector opposite the source measures the integrated absorption along a line through the object for each ray component.
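The line-integral measurement can be sketched as a nearest-neighbor discrete projection: each pixel's value is accumulated into the detector bin whose signed ray distance $l = x\cos\theta + y\sin\theta$ is nearest. This is an illustrative sketch, not a production Radon transform; the function name and bin count are our own choices.

```python
import numpy as np

def parallel_projection(f, theta, n_bins=91):
    """Nearest-neighbor parallel-beam projection of a 2D density f at angle theta."""
    n = f.shape[0]
    c = (n - 1) / 2.0                      # center the object on the origin
    yy, xx = np.indices(f.shape)
    # Signed distance of each pixel from the ray through the origin.
    l = (xx - c) * np.cos(theta) + (yy - c) * np.sin(theta)
    idx = np.round(l + (n_bins - 1) / 2.0).astype(int)
    g = np.zeros(n_bins)
    np.add.at(g, idx, f)                   # accumulate absorption along each ray
    return g

rng = np.random.default_rng(1)
f = rng.random((63, 63))

# At theta = 0 the projection reduces to column sums of f.
g0 = parallel_projection(f, 0.0)
off = (91 - 1) // 2 - (63 - 1) // 2
print(np.allclose(g0[off:off + 63], f.sum(axis=0)))

# Total absorbed mass is conserved at every view angle.
print(np.isclose(parallel_projection(f, 0.7).sum(), f.sum()))
```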
As always, the object is described by a density function f(x, y). Defining, as illustrated in Fig. 2.35, $l$ to be the distance of a particular ray from the origin, $\theta$ to be the angle between a normal to the ray and the x axis, and $\alpha$ to be the distance along the ray, measurements collected by a parallel beam tomography system take the form

$g(l, \theta) = \int f(l\cos\theta - \alpha\sin\theta,\; l\sin\theta + \alpha\cos\theta)\, d\alpha$ (2.49)

where $g(l, \theta)$ is the Radon transform of f(x, y). The Radon transform is defined for $f \in L^2(\mathbb{R}^n)$ as the integral of $f$ over all hyperplanes of dimension $n - 1$. Each