OPTICAL IMAGING AND SPECTROSCOPY, Part 10

field per unit solid angle and wavelength. The radiance is well defined for quasihomogeneous fields as the Fourier transform of the cross-spectral density:

$$B(\mathbf{x}, \mathbf{s}, \nu) = \iint W(\Delta\mathbf{x}, \mathbf{x}, \nu)\, e^{(2\pi i\nu/c)\,\mathbf{s}\cdot\Delta\mathbf{x}}\, d\,\Delta\mathbf{x} \qquad (10.49)$$

Under this approximation, measurement of the radiance on a surface is equivalent to measuring W. Of course, we observe in Eqn. (6.52) that if W(Δx, Δy, x, y, ν) is invariant with respect to x, y over the aperture of a lens, then the power spectral density in the focal plane is

$$S(x, y, \nu) = \frac{4\nu^2}{c^2F^2} \iint W(\Delta x, \Delta y, x, y, \nu)\, H\!\left(\frac{\nu\Delta x}{2cF}, \frac{\nu\Delta y}{2cF}\right) \exp\!\left[\frac{2\pi i\nu\,(x\Delta x + y\Delta y)}{cF}\right] d\Delta x\, d\Delta y \qquad (10.50)$$

where we set z = F, H(u, v) is the optical transfer function, and x, y can be taken as the transverse position of the optical axis. Neglecting the OTF for a moment, we therefore find that the power spectral density at the focus of a lens illuminated by a quasihomogeneous source approximates the radiance, specifically

$$B\!\left(x,\; s_x = \frac{x}{F},\; s_y = \frac{y}{F},\; \nu\right) \propto S(x, y, \nu) \qquad (10.51)$$
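The mapping in Eqn. (10.51) is easily made concrete: a focal-plane sample at (x, y) behind a lens of focal length F reads out the radiance arriving from direction cosines (x/F, y/F). The focal length and pixel positions below are illustrative assumptions, not values from the text.

```python
# Per Eqn. (10.51), each focal-plane position behind a lens samples the
# radiance from one direction. Numbers here are illustrative.

def focal_position_to_direction(x, y, F):
    """Map focal-plane coordinates (m) to the sampled direction cosines."""
    return x / F, y / F

F = 50e-3              # assumed 50 mm focal length
sx, sy = focal_position_to_direction(1e-3, -0.5e-3, F)
print(sx, sy)          # approximately 0.02, -0.01
```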

The radiance emitted by a translucent 3D object is effectively the x-ray projection

described, for example, by Eqn. (10.12). We have encountered such projections in

diverse contexts throughout the text. One may ray-trace the radiance to propagate

the field from one plane to the next to construct perspective views from diverse

vantage points or apply computed tomography to radiance data to reconstruct 3D

objects. As mentioned in our discussion of tomographic reconstruction in Section 2.6, the 4D radiance over a surface containing a 3D object overconstrains the tomographic inverse problem. Reconstruction may be achieved over a 3D projection space

satisfying Tuy’s condition. Computed tomography from focal images [74,168], from

RSI EDOF images [170], and from cubic phase EDOF images [72] are discussed by

Marks et al. More recently, optical projection tomography has been widely applied in

the analysis of translucent biological samples [220,221].
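As a toy illustration of such x-ray projections (the discrete volume below is an arbitrary assumption, not an example from the text), axis-aligned views of a translucent object are just sums of the density along one axis:

```python
import numpy as np

# For a translucent object, the radiance along a ray is the line integral
# (here, a discrete sum) of the object density along that ray.
density = np.zeros((8, 8, 8))
density[2:6, 2:6, 2:6] = 1.0       # a uniform translucent cube

proj_z = density.sum(axis=2)       # projection viewed along z
proj_x = density.sum(axis=0)       # projection viewed along x
print(proj_z.max(), proj_x.max())  # both 4.0: the cube is 4 voxels thick
```

Computed tomography inverts a sufficient set of such projections back to the 3D density.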

Optical projection microscopy commonly applies full solid angle sampling to obtain diffraction-limited 3D reconstruction. Remote sampling using projection tomography, in contrast, relies on a more limited angular sampling range. Projection tomography using a camera array is illustrated in Fig. 10.37. We assume in Fig. 10.37 that the aperture of each camera is A, that the camera optical axes are dispersed over range D in the transverse plane, and that the range to the object is z_o. The band volume for tomographic reconstruction from this camera array is determined by the angular range Θ = D/z_o. The sampling structure within this band volume is determined by the camera-to-camera displacement and camera focal parameters.
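The geometry of Fig. 10.37 reduces to a pair of numbers, the angular range and the projection spacing. The aperture, array extent, range, and wavelength below are assumptions chosen for illustration:

```python
A   = 10e-3   # per-camera aperture, assumed 10 mm
D   = 0.5     # transverse spread of camera axes, assumed 0.5 m
z_o = 5.0     # range to the object, assumed 5 m
lam = 550e-9  # wavelength, assumed 550 nm

theta = D / z_o      # angular observation range, Theta = D/z_o
dl = z_o * lam / A   # projection spacing = transverse resolution z_o*lam/A
print(theta, dl)     # 0.1 rad and about 0.28 mm
```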

Assuming that projections at angle θ are uniformly sampled in l, one may identify the projections illustrated in Fig. 10.38 from radiance measurements by the camera array. The displacement Δl from one projection to the next corresponds to the transverse resolution z_oλ/A. According to Eqn. (2.52), the Fourier transform of the radiance with respect to l for fixed s(θ) yields an estimate of the Fourier transform of the object along the ray at angle θ illustrated in Fig. 10.39. The maximum spatial frequency for this ray is determined by Δl such that u_{l,max} = A/(z_oλ). The spatial frequency w along the z axis is u_l sin Θ. Assuming that the angular range D/z_o sampled by the camera array along the x and y axes is the same, the band volume sampled by the array is illustrated in Fig. 10.40. The lack of z bandpass at low transverse frequencies corresponds to the "missing cone" that we have encountered in several other contexts. The z resolution obtained on tomographic reconstruction is proportional to the transverse bandwidth of the object. For a point object, the maximum spatial frequency w_max = u_max sin Θ = AD/(z_o²λ) occurs at the edge of the band volume. The longitudinal resolution for tomographic reconstruction is

$$\Delta z = \frac{1}{w_{\max}} = \frac{z_o^2 \lambda}{AD} \qquad (10.52)$$

Comparing with previous analyses in Sections 10.3 and 6.4, we see that the longitudinal resolution is improved relative to a single aperture by the ratio 8D/A. The factor of 8 improvement arises from the fact that the tomographic band volume is maximal at the edge of the transverse bandpass, while the 3D focal band volume falls to zero at the limits of the transverse OTF. A multiple-camera array "synthesizes" an aperture of radius D for improved longitudinal resolution.

Figure 10.37 Projection tomography geometry. An object is observed by cameras of aperture A at range z_o. The range of camera positions is D. The angular observation range is Θ ≈ D/z_o.
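Equation (10.52) and the 8D/A comparison can be checked numerically. The apertures, range, and wavelength below are assumed values, and the single-aperture figure uses the 8 z_o² λ/A² form implied by the quoted ratio, which is an inference rather than a formula stated here:

```python
A   = 10e-3    # per-camera aperture, 10 mm (assumed)
D   = 0.5      # array extent, 0.5 m (assumed)
z_o = 5.0      # object range, 5 m (assumed)
lam = 550e-9   # wavelength, 550 nm (assumed)

dz_array  = z_o**2 * lam / (A * D)   # Eqn. (10.52)
dz_single = 8 * z_o**2 * lam / A**2  # single aperture, per the 8D/A ratio
print(dz_array, dz_single, dz_single / dz_array)  # ratio is 8*D/A = 400
```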

Realistic objects are not translucent radiators for which the observed radiance is simply the x-ray projection of the object density. As discussed by Marks et al. [168], occlusion and opaque surfaces may lead to unresolvable ambiguities in radiance measurements. In some cases, more camera perspectives than naive Radon analysis suggests may be needed to see around obscuring surfaces. In other cases, such as a uniformly radiating 3D surface, somewhat fewer observations may suffice.

Figure 10.38 Sampling of x-ray projections along angle θ.

Figure 10.39 Fourier space recovered via the projection slice theorem from the samples of Fig. 10.38.

The assumption that the cross-spectral density is spatially stationary (homogeneous) across each subaperture is central to the association of radiance and focal spectral density or irradiance. With reference to Eqn. (6.71), this assumption is equivalent to assuming that Δq/λz ≪ 1 over the range of the aperture and the depth of the object, where Δq = A²/2 is the variation in q over the aperture. Thus, the quasihomogeneous assumption holds if A ≪ √(2zλ). Simple projection tomography requires one to restrict A to this limit. Of course, this strategy is unfortunate in that it also limits the transverse spatial resolution to λz/A ≳ √(λz).
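The aperture limit and its resolution cost can be tabulated directly; the range and wavelength below are assumed values:

```python
import math

lam = 550e-9   # wavelength (assumed)
z   = 5.0      # propagation distance (assumed)

A_limit = math.sqrt(2 * z * lam)  # quasihomogeneity requires A well below this
A = 0.1 * A_limit                 # pick A comfortably inside the limit
dx = lam * z / A                  # transverse resolution that results
print(A_limit, dx)                # aperture capped near 2.3 mm; dx about 12 mm
```

Even a generous aperture under this constraint leaves a transverse resolution far coarser than the diffraction limit of an unconstrained lens, which is the tradeoff the text describes.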

Radiance-based computer vision is also based on Eqn. (10.51). For example, light field photography uses an array of apertures to sample the radiance across an aperture [151]. A basic light field camera, consisting of a 2D array of subapertures, samples the radiance across a plane. The radiance may then be projected by ray tracing to estimate the radiance in any other plane, or may be processed by projection tomography or data-dependent algorithms to estimate the object state from the field radiance. While the full 4D radiance is redundant for translucent 3D objects, some advantages in processing or scene fidelity may be obtained for opaque objects under structured illumination. 4D sampling is important when W(Δx, Δy, x, y, ν) cannot be reduced to W(Δx, Δy, q, ν). In such situations, however, one may find a camera array with a diversity of focal and spectral sampling characteristics more useful than a 2D array of identical imagers.
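The ray-tracing projection of a sampled radiance from one plane to another amounts to a shear: the radiance B(x, s) a distance d downstream equals B(x − d·s, s) in the original plane. A minimal one-dimensional sketch, with an assumed grid and test pattern rather than anything from the text:

```python
import numpy as np

def propagate_lightfield(L, xs, ss, d):
    """Resample radiance L[i_x, i_s] from plane z to plane z + d.

    A ray with direction cosine s passing x at z passes x + d*s at z + d,
    so each angular column is shifted by d*s and re-interpolated.
    """
    out = np.empty_like(L)
    for j, s in enumerate(ss):
        out[:, j] = np.interp(xs - d * s, xs, L[:, j], left=0.0, right=0.0)
    return out

xs = np.linspace(-1e-2, 1e-2, 201)   # positions (m), assumed grid
ss = np.linspace(-0.05, 0.05, 11)    # direction cosines, assumed grid
L = np.exp(-(xs[:, None] / 1e-3) ** 2) * np.ones(ss.size)  # stripe at x = 0
Lp = propagate_lightfield(L, xs, ss, d=0.1)  # radiance 10 cm downstream
```

After propagation the stripe appears displaced by d·s in each angular channel, which is exactly the perspective shift a light field camera exploits.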

The plenoptic camera extends the light field approach to optical systems with nonvanishing longitudinal resolution [1,153]. As illustrated in Fig. 10.41, a plenoptic camera consists of an objective lens focusing on a microlens array coupled to a 2D detector array. Each microlens covers an n × n block of pixels. Assuming that the field is quasihomogeneous over each microlens aperture, the plenoptic camera returns the radiance for n² angular values at each microlens position.

Figure 10.40 Band volume covered by sampling over angular range D/z_o = 0.175 in units of u_max.

Recalling from Section 6.2 that the coherence cross section of an incoherent field focused through a lens aperture A is approximately λ(f/#), we find that the assumption that the field is quasihomogeneous corresponds to assuming that the image is slowly varying on the scale of the transverse resolution. This assumption is, of course, generally violated by imaging systems. In the original plenoptic camera, a pupil plane

distortion is added to blur the image to obtain a quasihomogeneous field at the focal plane. Alternatively, one could defocus the microlenses from the image plane to blur the image into a quasihomogeneous state. The net effect of this approach is that the system resolution is determined by the microlens aperture rather than the objective aperture, and the resolution advantages of the objective are lost. In view of scaling issues in lens design and the advantages of projection tomography discussed earlier in this section, the plenoptic camera may be expected to be inferior to an array of smaller objectives covering the same overall system aperture if one's goal is radiance measurement.
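The n × n readout structure of the plenoptic camera maps naturally onto an array reshape; the sensor and block sizes here are assumed for illustration:

```python
import numpy as np

# Each microlens covers an n x n block of detector pixels, so a sensor of
# shape (M*n, M*n) reorganizes into radiance samples B[my, mx, sy, sx]:
# n**2 angular values at each of the M x M microlens positions.
n, M = 4, 16                          # 4x4 pixels per lens, 16x16 lenses
sensor = np.arange((M * n) ** 2, dtype=float).reshape(M * n, M * n)

radiance = (sensor
            .reshape(M, n, M, n)      # split rows/cols into (lens, pixel)
            .transpose(0, 2, 1, 3))   # reorder to [my, mx, sy, sx]
print(radiance.shape)                 # (16, 16, 4, 4)
```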

This does not imply, however, that the plenoptic camera or related multiaperture sampling schemes are not useful in system design. The limited transverse resolution is due to an inadequate forward model rather than a physical limitation. In particular, the need to restrict aperture size and object feature size is due to the radiance field approximation. With a more accurate physical model, one might attempt to simultaneously maximize transverse and longitudinal focal resolution. This approach requires novel coding and estimation strategies; a conventional imaging system with high longitudinal resolution cannot simultaneously focus on all ranges of interest.

The plenoptic camera may be regarded as a system that uses an objective to create

a compact 3D focal space and then uses a diversity of lenses to sample this space.

Many coding and analytical tools could be applied in such a system. For example,

a reference structure could be placed in the focal volume to encode 3D features

prior to lowpass filtering in the lenslets, pupil functions could be made to structure

the lenslet PSFs and encode points in the image volume, or filters could encode

diverse spectral projections in the lenslet images.

The idea of sampling the volume using diverse apertures is of particular interest in

microscope design. As discussed in Section 2.4, conventional microscope design

seeks to increase the angular extent of object features. In modern systems,

however, focal plane features may be of nearly the same size as the target object

Figure 10.41 Optical system for a plenoptic camera: (a) object; (b) blur filter; (c) objective lens; (d) image; (e) microlens array; (f) detector array.
