
Communications and Control Engineering

Published titles include:

Stability and Stabilization of Infinite Dimensional Systems with Applications

Zheng-Hua Luo, Bao-Zhu Guo and Omer Morgul

Nonsmooth Mechanics (Second edition)

Bernard Brogliato

Nonlinear Control Systems II

Alberto Isidori

L2-Gain and Passivity Techniques in Nonlinear Control

Arjan van der Schaft

Control of Linear Systems with Regulation and Input Constraints

Ali Saberi, Anton A. Stoorvogel and Peddapullaiah Sannuti

Robust and H∞ Control

Ben M. Chen

Computer Controlled Systems

Efim N. Rosenwasser and Bernhard P. Lampe

Dissipative Systems Analysis and Control

Rogelio Lozano, Bernard Brogliato, Olav Egeland and Bernhard Maschke

Control of Complex and Uncertain Systems

Stanislav V. Emelyanov and Sergey K. Korovin

Robust Control Design Using H∞ Methods

Ian R. Petersen, Valery A. Ugrinovski and Andrey V. Savkin

Model Reduction for Control System Design

Goro Obinata and Brian D.O. Anderson

Control Theory for Linear Systems

Harry L. Trentelman, Anton Stoorvogel and Malo Hautus

Functional Adaptive Control

Simon G. Fabri and Visakan Kadirkamanathan

Positive 1D and 2D Systems

Tadeusz Kaczorek

Identification and Control Using Volterra Models

Francis J. Doyle III, Ronald K. Pearson and Babatunde A. Ogunnaike

Non-linear Control for Underactuated Mechanical Systems

Isabelle Fantoni and Rogelio Lozano

Robust Control (Second edition)

Jürgen Ackermann

Flow Control by Feedback

Ole Morten Aamo and Miroslav Krstić

Learning and Generalization (Second edition)

Mathukumalli Vidyasagar

Constrained Control and Estimation

Graham C. Goodwin, María M. Seron and José A. De Doná

Randomized Algorithms for Analysis and Control of Uncertain Systems

Roberto Tempo, Giuseppe Calafiore and Fabrizio Dabbene

Switched Linear Systems

Zhendong Sun and Shuzhi S. Ge

Tohru Katayama

Subspace Methods for

System Identification

With 66 Figures


Tohru Katayama, PhD

Department of Applied Mathematics and Physics,

Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan

Series Editors

E.D. Sontag · M. Thoma · A. Isidori · J.H. van Schuppen

British Library Cataloguing in Publication Data

Katayama, Tohru, 1942-

Subspace methods for system identification : a realization approach. - (Communications and control engineering)

1. System identification 2. Stochastic analysis

I. Title

003.1

ISBN-10: 1852339810

Library of Congress Control Number: 2005924307

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

Communications and Control Engineering Series ISSN 0178-5354

ISBN-10 1-85233-981-0

ISBN-13 978-1-85233-981-4

Springer Science+Business Media

springeronline.com

© Springer-Verlag London Limited 2005

MATLAB® is the registered trademark of The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098, U.S.A. http://www.mathworks.com

The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use.

The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Typesetting: Camera ready by author

Production: LE-TEX Jelonek, Schmidt & Vöckler GbR, Leipzig, Germany

Printed in Germany

69/3141-543210 Printed on acid-free paper SPIN 11370000

To my family

Preface

Numerous papers on system identification have been published over the last 40 years. Though there were substantial developments in the theory of stationary stochastic processes and multivariable statistical methods during the 1950s, it is widely recognized that the theory of system identification started only in the mid-1960s with the publication of two important papers: one due to Åström and Bohlin [17], in which the maximum likelihood (ML) method was extended to a serially correlated time series to estimate ARMAX models, and the other due to Ho and Kalman [72], in which the deterministic state space realization problem was solved for the first time using a certain Hankel matrix formed in terms of impulse responses. These two papers laid the foundation for the future development of system identification theory and techniques [55].

The scope of the ML identification method of Åström and Bohlin [17] was to build single-input, single-output (SISO) ARMAX models from observed input-output data sequences. Since the appearance of their paper, many statistical identification techniques have been developed in the literature, most of which are now grouped under the label of prediction error methods (PEM) or instrumental variable (IV) methods. This work has culminated in the publication of the volumes by Ljung [109] and Söderström and Stoica [145]. At this point we can say that the theory of system identification for SISO systems is established, and the various identification algorithms have been well tested and are now available as MATLAB® programs.

However, identification of multi-input, multi-output (MIMO) systems is an important problem that is not dealt with satisfactorily by PEM methods. The identification problem based on the minimization of a prediction error criterion (or a least-squares type criterion), which in general is a complicated function of the system parameters, has to be solved by iterative descent methods that may get stuck in local minima. Moreover, optimization methods need canonical parametrizations, and it may be difficult to guess a suitable canonical parametrization from the outset. Since no single continuous parametrization covers all possible multivariable linear systems with a fixed McMillan degree, it may be necessary to change parametrization in the course of the optimization routine. Thus the use of optimization criteria and canonical parametrizations can lead to local minima far from the true solution, and to numerically ill-conditioned problems due to poor identifiability, i.e., to near insensitivity of the criterion to variations of some parameters. Hence it seems that the PEM approach has inherent difficulties for MIMO systems.
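To make the discussion concrete, the following sketch (not taken from the book) fits a SISO output-error model by iteratively minimizing the sum of squared prediction errors; the model orders, the synthetic data, and the use of SciPy's generic least-squares routine are illustrative assumptions.

```python
# Sketch of a prediction error method (PEM) for a SISO output-error model
# y(t) ~ [B(q)/A(q)] u(t) + noise.  The predicted output depends nonlinearly
# on the parameters, so the criterion is minimized by iterative descent.
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import least_squares

def prediction_error(theta, u, y, nb, na):
    """Residuals y - y_hat for theta = [b_1..b_nb, a_1..a_na]."""
    b = theta[:nb]                      # numerator coefficients
    a = np.r_[1.0, theta[nb:nb + na]]   # monic denominator
    y_hat = lfilter(b, a, u)            # model output simulated from u only
    return y - y_hat

# Synthetic data from a "true" first-order system plus output noise.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = lfilter([0.5], [1.0, -0.8], u) + 0.05 * rng.standard_normal(500)

theta0 = np.array([0.1, -0.5])          # rough initial guess
sol = least_squares(prediction_error, theta0, args=(u, y, 1, 1))
print(sol.x)                            # close to the true values (0.5, -0.8)
```

For an ARX structure the same criterion is linear in the parameters and reduces to ordinary least squares, but for output-error, ARMAX, or state-space model structures the predictor is a nonlinear function of the parameters, which is what forces the iterative search described above.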

On the other hand, stochastic realization theory, initiated by Faurre [46], Akaike [1], and others, has brought a different philosophy of building models from data, one that is not based on optimization concepts. A key step in stochastic realization is either to apply deterministic realization theory to a certain Hankel matrix constructed from sample estimates of the process covariances, or to apply canonical correlation analysis (CCA) to the future and past of the observed process. These algorithms can be implemented very efficiently and in a numerically stable way using the tools of modern numerical linear algebra, such as the singular value decomposition (SVD).
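As a rough illustration of that SVD step (a toy sketch under simplifying assumptions, not the realization algorithms presented later in the book), the routine below applies the Ho-Kalman idea to a Hankel matrix of scalar impulse responses; with sample covariances in place of impulse responses, the same factor-and-shift mechanics reappear in the stochastic setting.

```python
# SVD-based realization from a Hankel matrix of impulse responses (SISO case).
import numpy as np

def realize_siso(h, n, k=10):
    """h: impulse response h[0], h[1], ...; n: desired state dimension."""
    # Scalar Hankel matrix built from h[1], h[2], ... (h[0] is the direct term D).
    H = np.array([[h[i + j + 1] for j in range(k)] for i in range(k)])
    U, s, Vt = np.linalg.svd(H)
    # Rank-n truncation, split into observability and reachability factors.
    Obs = U[:, :n] * np.sqrt(s[:n])            # extended observability matrix
    Con = np.sqrt(s[:n])[:, None] * Vt[:n]     # extended reachability matrix
    C = Obs[:1, :]
    B = Con[:, :1]
    A = np.linalg.pinv(Obs[:-1]) @ Obs[1:]     # shift-invariance of Obs
    D = np.array([[h[0]]])
    return A, B, C, D

# Impulse response of y(t) = 0.8*y(t-1) + u(t-1):  h = 0, 1, 0.8, 0.64, ...
h = [0.0] + [0.8 ** i for i in range(25)]
A, B, C, D = realize_siso(h, n=1)
print(A)   # roughly [[0.8]]
```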

In the mid-1980s a new effort in digital signal processing and system identification based on the QR decomposition and the SVD emerged, and many papers were published in the literature [100, 101, 118, 119]. These realization-theory-based techniques have led to the development of various so-called subspace identification methods, including [163, 164, 169, 171–173]. Moreover, Van Overschee and De Moor [165] published the first comprehensive book on subspace identification of linear systems. An advantage of subspace methods is that we need neither (nonlinear) optimization techniques nor a canonical form imposed on the system, so subspace methods do not suffer from the inconveniences encountered in applying PEM methods to MIMO system identification.

Though I have been interested in stochastic realization theory for many years, it was around 1990 that I actually resumed studies on realization theory, including subspace identification methods. However, realization results developed for deterministic systems on the one hand, and stochastic systems on the other, could not be applied to the identification of dynamic systems in which both a deterministic test input and a stochastic disturbance are involved. In fact, the deterministic realization result does not consider any noise, and the stochastic realization theory developed up to the early 1990s addressed only the modeling of stochastic processes, or time series. I soon realized that we needed a new realization theory to understand many existing subspace methods and their underlying relations, and to develop advanced algorithms. Thus I was fully convinced that a new stochastic realization theory in the presence of exogenous inputs was needed for further development of subspace system identification theory and algorithms.

While we were attending the MTNS (International Symposium on Mathematical Theory of Networks and Systems) in Regensburg in 1993, I suggested to Giorgio Picci, University of Padova, that we should do joint work on stochastic realization theory in the presence of exogenous inputs, and a collaboration between us started in 1994 when he stayed at Kyoto University as a visiting professor. I subsequently visited him at the University of Padova in 1997. The collaboration has resulted in several joint papers [87–90, 93, 130, 131]. Professor Picci in particular introduced the idea of decomposing the output process into deterministic and stochastic components by means of a preliminary orthogonal decomposition, and then applying the existing deterministic and stochastic realization techniques to each component to obtain a realization theory in the presence of exogenous inputs. On the other hand, inspired by the CCA-based approach, I developed a method of solving a multi-stage Wiener prediction problem to derive an innovation representation of a stationary process with an observable exogenous input, from which subspace identification methods are successfully obtained.
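On finite data the preliminary orthogonal decomposition can be pictured as an ordinary projection of the output data onto the row space spanned by present and lagged input data, with the residual playing the role of the stochastic component. The snippet below is only a simplified illustration of this idea, with made-up data and a two-row input matrix; the construction in the book works with the full past of the input and requires a persistence-of-excitation (PE) condition.

```python
# Toy orthogonal decomposition of an output record into an input-driven
# ("deterministic") component and a residual ("stochastic") component.
import numpy as np

rng = np.random.default_rng(1)
N = 400
u = rng.standard_normal(N)                 # observed input
e = rng.standard_normal(N)                 # unobserved disturbance
y = 2.0 * u + 0.3 * np.roll(u, 1) + 0.5 * e

# Rows we project onto: the input and one lagged copy (a stand-in for the
# input data matrix used by subspace methods).
U = np.vstack([u, np.roll(u, 1)])          # 2 x N
P = U.T @ np.linalg.solve(U @ U.T, U)      # orthogonal projector onto row space of U
y_det = y @ P                              # input-driven component
y_sto = y - y_det                          # residual component

print(np.allclose(y_sto @ U.T, 0.0, atol=1e-8))   # True: residual is orthogonal to the input rows
```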

This book is an outgrowth of the joint work with Professor Picci on stochastic realization theory and subspace identification. It provides an in-depth introduction to subspace methods for system identification of discrete-time linear systems, together with our results on realization theory in the presence of exogenous inputs and the associated subspace system identification methods. I have included proofs of theorems and lemmas wherever possible, as well as solutions to problems, in order to facilitate the reader's basic understanding of the material and to minimize the effort needed to consult many references.

This textbook is divided into three parts. Part I reviews basic results, from numerical linear algebra to Kalman filtering, used throughout the book; Part II presents the deterministic and stochastic realization theories developed by Ho and Kalman, Faurre, and Akaike; and Part III discusses stochastic realization results in the presence of exogenous inputs and their adaptation to subspace identification methods; see Section 1.6 for more details. Readers can thus approach the book according to their needs. For example, readers with a good knowledge of linear system theory and Kalman filtering can begin with Part II, and those mainly interested in applications can read just the algorithms of the various identification methods in Part III, returning to Part I and/or Part II when needed. I believe this textbook is suitable for advanced students, applied scientists, and engineers who want to acquire a solid knowledge of subspace identification methods and their algorithms.

I would like to express my sincere thanks to Giorgio Picci, who has greatly contributed to our fruitful collaboration on stochastic realization theory and subspace identification methods over the last ten years. I am deeply grateful to Hideaki Sakai, who read the whole manuscript carefully and provided invaluable suggestions that have led to many changes in the manuscript. I am also grateful to Kiyotsugu Takaba and Hideyuki Tanaka for their useful comments on the manuscript. I have benefited from joint work with Takahira Ohki, Toshiaki Itoh, Morimasa Ogawa, and Hajime Ase, who introduced me to many problems regarding the modeling and identification of industrial processes.

The related research from 1996 through 2004 was sponsored by the Grant-in-Aid for Scientific Research of the Japan Society for the Promotion of Science, which is gratefully acknowledged.

Tohru Katayama

Kyoto, Japan

January 2005

Contents

1 Introduction ................................................... 1

1.1 System Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 Classical Identification Methods . . . ........................... 4

1.3 Prediction Error Method for State Space Models . . . . . . . . . . . . . . . . 6

1.4 Subspace Methods of System Identification . . . . ................. 8

1.5 Historical Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

1.6 Outline of the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

1.7 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

Part I Preliminaries

2 Linear Algebra and Preliminaries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.1 Vectors and Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.2 Subspaces and Linear Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2.3 Norms of Vectors and Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.4 QR Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2.5 Projections and Orthogonal Projections . . . . . . . . . . . . . . . . . . . . . . . . 27

2.6 Singular Value Decomposition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

2.7 Least-Squares Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.8 Rank of Hankel Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

2.9 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

2.10 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3 Discrete-Time Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3.1 z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3.2 Discrete-Time LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

3.3 Norms of Signals and Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

3.4 State Space Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

3.5 Lyapunov Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

3.6 Reachability and Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5


3.7 Canonical Decomposition of Linear Systems. . . . . . . . . . . . . . . . . . . . 55

3.8 Balanced Realization and Model Reduction . . . . . . . . . . . . . . . . . . . . . 58

3.9 Realization Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.10 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

3.11 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

4 Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

4.1 Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

4.1.1 Markov Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

4.1.2 Means and Covariance Matrices . . . . . . . . . . . . . . . . . . . . . . . . 75

4.2 Stationary Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

4.3 Ergodic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

4.4 Spectral Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

4.5 Hilbert Space and Prediction Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 87

4.6 Stochastic Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

4.7 Stochastic Linear Time-Invariant Systems . . . . . . . . . . . . . . . . . . . . . . 98

4.8 Backward Markov Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

4.9 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

4.10 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

5 Kalman Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

5.1 Multivariate Gaussian Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

5.2 Optimal Estimation by Orthogonal Projection . . . . . . . . . . . . . . . . . . . 113

5.3 Prediction and Filtering Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

5.4 Kalman Filter with Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

5.5 Covariance Equation of Predicted Estimate . . . . . . . . . . . . . . . . . . . . . 127

5.6 Stationary Kalman Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

5.7 Stationary Backward Kalman Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

5.8 Numerical Solution of ARE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

5.9 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

5.10 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Part II Realization Theory

6 Realization of Deterministic Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

6.1 Realization Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

6.2 Ho-Kalman’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

6.3 Data Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

6.4 LQ Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

6.5 MOESP Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

6.6 N4SID Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

6.7 SVD and Additive Noises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

6.8 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169

6.9 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170


7 Stochastic Realization Theory (1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

7.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

7.2 Stochastic Realization Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

7.3 Solution of Stochastic Realization Problem . . . . . . . . . . . . . . . . . . . . . 176

7.3.1 Linear Matrix Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

7.3.2 Simple Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

7.4 Positivity and Existence of Markov Models . . . . . . . . . . . . . . . . . . . . . 183

7.4.1 Positive Real Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183

7.4.2 Computation of Extremal Points . . . . . . . . . . . . . . . . . . . . . . . . 189

7.5 Algebraic Riccati-like Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192

7.6 Strictly Positive Real Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194

7.7 Stochastic Realization Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198

7.8 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199

7.9 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

7.10 Appendix: Proof of Lemma 7.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201

8 Stochastic Realization Theory (2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203

8.1 Canonical Correlation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203

8.2 Stochastic Realization Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207

8.3 Akaike’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209

8.3.1 Predictor Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209

8.3.2 Markovian Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212

8.4 Canonical Correlations Between Future and Past . . . . . . . . . . . . . . . . 216

8.5 Balanced Stochastic Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217

8.5.1 Forward and Backward State Vectors . . . . . . . . . . . . . . . . . . . . 217

8.5.2 Innovation Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219

8.6 Reduced Stochastic Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

8.7 Stochastic Realization Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227

8.8 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230

8.9 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232

8.10 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233

8.11 Appendix: Proof of Lemma 8.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234

Part III Subspace Identification

9 Subspace Identification (1) – ORT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239

9.1 Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239

9.2 Stochastic Realization with Exogenous Inputs . . . . . . . . . . . . . . . . . . . 241

9.3 Feedback-Free Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243

9.4 Orthogonal Decomposition of Output Process . . . . . . . . . . . . . . . . . . . 245

9.4.1 Orthogonal Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245

9.4.2 PE Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246

9.5 State Space Realizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248

9.5.1 Realization of Stochastic Component . . . . . . . . . . . . . . . . . . . 248


9.5.2 Realization of Deterministic Component . . . . . . . . . . . . . . . . . 249

9.5.3 The Joint Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253

9.6 Realization Based on Finite Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254

9.7 Subspace Identification Method – ORT Method . . . . . . . . . . . . . . . . . 256

9.7.1 Subspace Identification of Deterministic Subsystem . . . . . . . 256

9.7.2 Subspace Identification of Stochastic Subsystem . . . . . . . . . . 259

9.8 Numerical Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261

9.9 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265

9.10 Appendix: Proofs of Theorem and Lemma . . . . . . . . . . . . . . . . . . . . . 265

9.10.1 Proof of Theorem 9.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265

9.10.2 Proof of Lemma 9.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268

10 Subspace Identification (2) – CCA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271

10.1 Stochastic Realization with Exogenous Inputs . . . . . . . . . . . . . . . . . . . 271

10.2 Optimal Predictor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275

10.3 Conditional Canonical Correlation Analysis . . . . . . . . . . . . . . . . . . . . 278

10.4 Innovation Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282

10.5 Stochastic Realization Based on Finite Data . . . . . . . . . . . . . . . . . . . . 286

10.6 CCA Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288

10.7 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292

10.8 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296

11 Identification of Closed-loop System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299

11.1 Overview of Closed-loop Identification . . . . . . . . . . . . . . . . . . . . . . . . 299

11.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301

11.2.1 Feedback System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301

11.2.2 Identification by Joint Input-Output Approach . . . . . . . . . . . . 303

11.3 CCA Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304

11.3.1 Realization of Joint Input-Output Process . . . . . . . . . . . . . . . . 304

11.3.2 Subspace Identification Method . . . . . . . . . . . . . . . . . . . . . . . . 307

11.4 ORT Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309

11.4.1 Orthogonal Decomposition of Joint Input-Output Process . . 309

11.4.2 Realization of Closed-loop System . . . . . . . . . . . . . . . . . . . . . 311

11.4.3 Subspace Identification Method . . . . . . . . . . . . . . . . . . . . . . . . 312

11.5 Model Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315

11.6 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317

11.6.1 Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318

11.6.2 Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321

11.7 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323

11.8 Appendix: Identification of Stable Transfer Matrices . . . . . . . . . . . . . 324

11.8.1 Identification of Deterministic Parts . . . . . . . . . . . . . . . . . . . . . 324

11.8.2 Identification of Noise Models . . . . . . . . . . . . . . . . . . . . . . . . . 325


Appendix

A Least-Squares Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329

A.1 Linear Regressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329

A.2 LQ Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334

B Input Signals for System Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . 337

C Overlapping Parametrization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343

D List of Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349

D.1 Deterministic Realization Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 349

D.2 MOESP Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350

D.3 Stochastic Realization Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351

D.4 Subspace Identification Algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . . 353

E Solutions to Problems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357

Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377

References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389

1 Introduction

In this introductory chapter, we briefly review the classical prediction error method (PEM) for identifying linear time-invariant (LTI) systems. We then discuss the basic idea of subspace methods of system identification, together with the advantages of subspace methods over the PEM as applied to multivariable dynamic systems.

1.1 System Identification

Figure 1.1 shows a schematic diagram of a dynamic system with an input, an output, and a disturbance. We can observe the input and the output but not the disturbance, and we can directly manipulate the input but not the disturbance. Even if we do not know the internal structure of the system, the measured input and output data provide useful information about the system behavior. Thus, we can construct mathematical models that describe the dynamics of the system of interest from observed input-output data.

Figure 1.1. A system with input and disturbance
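As a concrete, made-up instance of this setting, the short simulation below generates data from an illustrative state-space system; the labels u (input), w (disturbance), and y (output) and the matrices are assumptions introduced here, not taken from the book. Only the pair (u, y) would be available to an identification method.

```python
# Simulate the setting of Figure 1.1: the input is chosen and measured,
# the disturbance acts unseen, and only input-output data are recorded.
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])                 # illustrative system matrices
b = np.array([1.0, 0.5])
c = np.array([1.0, 0.0])

N = 200
u = rng.standard_normal(N)                 # input: manipulated and measured
w = 0.1 * rng.standard_normal(N)           # disturbance: neither measured nor manipulated

x = np.zeros(2)
y = np.zeros(N)
for t in range(N):
    y[t] = c @ x + w[t]                    # measured output, corrupted by the disturbance
    x = A @ x + b * u[t]                   # state update driven by the input

# (u, y) is the data record from which a model is to be identified.
```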

Dynamic models for prediction and control include transfer functions, state space models, and time-series models, which are parametrized in terms of a finite number of parameters; these dynamic models are therefore referred to as parametric models. Also used are non-parametric models such as impulse responses, frequency responses, spectral density functions, etc.
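To illustrate the distinction with the invented matrices from the sketch above: a state-space model is specified by a handful of matrix entries (a parametric model), whereas its impulse response is a whole sequence of Markov parameters (a non-parametric description of the same system).

```python
import numpy as np

# Parametric model: finitely many parameters (the entries of A, b, c and d).
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
b = np.array([1.0, 0.5])
c = np.array([1.0, 0.0])
d = 0.0

# Non-parametric description of the same system: the impulse response
# h(0) = d, h(k) = c A^(k-1) b, k = 1, 2, ... (a sequence, not a few numbers).
h = [d] + [c @ np.linalg.matrix_power(A, k - 1) @ b for k in range(1, 20)]
print(np.round(h[:5], 3))
```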

System identification is a methodology, developed mainly in the area of automatic control, by which we can choose the best model(s) from a given model set.
