
Environmental and Hydrological Systems Modelling
A. W. Jayawardena
ISBN-13: 978-0-415-46532-8
Environmental Engineering
Mathematical modelling has become an indispensable tool for engineers,
scientists, planners, decision makers and many other professionals to make
predictions of future scenarios as well as of impending real events. Because the
modelling approach and the model to be used are problem specific, no
single model or approach can solve every problem, and each situation
imposes its own constraints. Modellers therefore need a choice of approaches
when confronted with constraints such as insufficient data, resources,
expertise and time.
Environmental and Hydrological Systems Modelling provides the tools
needed by presenting different approaches to modelling the water
environment over a range of spatial and temporal scales. Their applications
are illustrated with a series of case studies drawn mainly from the Asia-Pacific
region. Coverage includes:
• Linear Systems
• Conceptual Models
• Data Driven Models
• Process-Based Models
• Risk-Management Models
• Model Parameter Estimation
• Model Calibration, Validation and Testing
This book will be of great value to advanced students, professionals,
academics and researchers working in the water environment.
A. W. Jayawardena is an Adjunct Professor at The University of Hong Kong and
Technical Advisor to Nippon Koei Company Ltd. (Consulting Engineers), Japan.
CRC Press is an imprint of the
Taylor & Francis Group, an informa business
Boca Raton London New York
MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the
accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software or related products
does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular
use of the MATLAB® software.
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2014 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Version Date: 20131216
International Standard Book Number-13: 978-0-203-92744-1 (eBook - PDF)
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://
www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
© 2010 Taylor & Francis Group, LLC
Contents
Preface xvii
Author xix
1 Introduction 1
1.1 Some definitions 1
1.1.1 System 1
1.1.2 State of a system 2
1.2 General systems theory (GST) 3
1.3 Ecological systems (Ecosystems) 4
1.4 Equi-finality 4
1.5 Scope and layout 5
References 7
2 Historical development of hydrological modelling 9
2.1 Basic concepts and governing equation of linear systems 9
2.1.1 Time domain analysis 9
2.1.1.1 Types of input functions 10
2.1.1.2 System response function – convolution integral 12
2.1.2 Frequency domain analysis 12
2.1.2.1 Fourier transform – frequency response function (FRF) 12
2.1.2.2 Laplace transform 14
2.1.2.3 z-Transform 15
2.2 Linear systems in hydrological modelling 16
2.2.1 Hydrological systems 16
2.2.2 Unit hydrograph 17
2.2.2.1 Unit hydrograph for a complex storm 18
2.2.2.2 Instantaneous unit hydrograph (IUH) 20
2.2.2.3 Empirical unit hydrograph 20
2.2.2.4 Unit pulse response function 21
2.2.3 Linear reservoir 21
2.2.4 Linear cascade 23
2.2.5 Linear channel 25
2.2.6 Time–area diagram 26
2.3 Random processes and linear systems 27
2.4 Non-linear systems 29
2.4.1 Determination of the kernel functions 29
2.5 Multilinear or parallel systems 31
2.6 Flood routing 31
2.6.1 Inventory method 31
2.6.2 Muskingum method 32
2.6.2.1 Estimation of the routing parameters K and x 33
2.6.2.2 Limitations of the Muskingum method 35
2.6.3 Modified Puls method 35
2.6.4 Muskingum–Cunge method 35
2.6.5 Hydraulic approach 37
2.6.5.1 Solution of the St. Venant equations 37
2.6.5.2 Diffusion wave approximation 38
2.6.5.3 Kinematic wave approximation 38
2.7 Reservoir routing 41
2.8 Rainfall–runoff modelling 43
2.8.1 Conceptual-type hydrologic models 44
2.8.1.1 Stanford watershed model (SWM) 44
2.8.1.2 Tank model 44
2.8.1.3 HEC series 45
2.8.1.4 Xinanjiang model 47
2.8.1.5 Variable infiltration capacity (VIC) model 49
2.8.2 Physics-based hydrologic models 51
2.8.2.1 Système Hydrologique Européen (SHE) model 51
2.8.3 Data-driven models 52
2.8.3.1 Why data-driven models? 53
2.8.3.2 Types of data-driven models 53
2.9 Guiding principles and criteria for choosing a model 53
2.10 Challenges in hydrological modelling 54
2.11 Concluding remarks 56
References 56
3 Population dynamics 61
3.1 Introduction 61
3.2 Malthusian growth model 61
3.3 Verhulst growth model 63
3.4 Predator–prey (Lotka–Volterra) model 64
3.5 Gompertz curve 65
3.6 Logistic map 66
3.6.1 Specific points in the logistic map 67
3.7 Cell growth 68
3.7.1 Cell division 69
3.7.2 Exponential growth 70
3.7.3 Cell growth models in a batch (closed system) bioreactor 70
3.8 Bacterial growth 72
3.8.1 Binary fission 73
3.8.2 Monod kinetics 73
3.9 Radioactive decay and carbon dating 74
3.10 Concluding remarks 75
References 76
4 Reaction kinetics 77
4.1 Introduction 77
4.2 Michaelis–Menten equation 78
4.3 Monod equation 81
4.4 Concluding remarks 84
References 84
5 Water quality systems 85
5.1 Dissolved oxygen systems 85
5.1.1 Biochemical oxygen demand (BOD) 85
5.1.2 Nitrification 88
5.1.3 Denitrification 88
5.1.4 Oxygen depletion equation in a river due to a single point source of BOD 89
5.1.5 Reoxygenation coefficient 92
5.1.6 Deoxygenation coefficient 94
5.2 Water quality in a completely mixed water body 94
5.2.1 Governing equations for a completely mixed system 95
5.2.2 Step function input 96
5.2.3 Periodic input function 97
5.2.4 Fourier series input 98
5.2.5 General harmonic response 99
5.2.6 Impulse input 101
5.2.7 Arbitrary input 101
5.3 Water quality in rivers and streams 106
5.3.1 Point sources 106
5.3.2 Distributed sources 108
5.3.3 Effect of spatial flow variation 109
5.3.3.1 Exponential spatial flow variation 110
5.3.4 Unsteady state 111
5.3.4.1 Non-dispersive systems 111
5.3.4.2 Dispersive systems 111
5.3.5 Tidal reaches 113
5.3.5.1 Special case of no decay 113
5.3.5.2 Special case of no dispersion 114
5.4 Concluding remarks 114
References 114
6 Longitudinal dispersion 117
6.1 Introduction 117
6.2 Governing equations 117
6.2.1 Some characteristics of turbulent diffusion 118
6.2.2 Shear flow dispersion 119
6.2.3 Taylor’s approximation 120
6.2.4 Turbulent mixing coefficients 120
6.3 Dispersion coefficient 121
6.3.1 Routing method 123
6.3.2 Time scale – dimensionless time 124
6.4 Numerical solution 126
6.4.1 Finite difference method 127
6.4.2 Finite element methods 128
6.4.3 Moving finite elements 130
6.5 Dispersion through porous media 131
6.6 General-purpose water quality models 134
6.6.1 Enhanced Stream Water Quality Model (QUAL2E) 134
6.6.2 Water Quality Analysis Simulation Programme (WASP) 135
6.6.3 One Dimensional Riverine Hydrodynamic and Water Quality Model (EPD-RIV1) 135
6.7 Concluding remarks 136
References 136
7 Time series analysis and forecasting 139
7.1 Introduction 139
7.2 Basic properties of a time series 139
7.2.1 Stationarity 139
7.2.2 Ergodicity 140
7.2.3 Homogeneity 140
7.3 Statistical parameters of a time series 140
7.3.1 Sample moments 140
7.3.2 Moving averages – low-pass filtering 141
7.3.3 Differencing – high-pass filtering 142
7.3.4 Recursive means and variances 142
7.4 Tests for stationarity 143
7.5 Tests for homogeneity 144
7.5.1 von Neumann ratio 145
7.5.2 Cumulative deviations 145
7.5.3 Bayesian statistics 148
7.5.4 Ratio test 148
7.5.5 Pettitt test 150
7.6 Components of a time series 151
7.7 Trend analysis 151
7.7.1 Tests for randomness and trend 151
7.7.1.1 Turning point test for randomness 152
7.7.1.2 Kendall’s rank correlation test (τ test) 153
7.7.1.3 Regression test for linear trend 154
7.7.1.4 Mann–Kendall test 155
7.7.2 Trend removal 156
7.7.2.1 Splines 157
7.8 Periodicity 159
7.8.1 Harmonic analysis – cumulative periodogram 159
7.8.2 Autocorrelation analysis 164
7.8.3 Spectral analysis 167
7.8.3.1 Hanning method (after J. von Hann) 171
7.8.3.2 Hamming method (after R.W. Hamming, 1983) 171
7.8.3.3 Lag window method (after Tukey, 1965) 172
7.8.4 Cross correlation 173
7.8.5 Cross-spectral density function 173
7.9 Stochastic component 174
7.9.1 Autoregressive (AR) models 175
7.9.1.1 Properties of autoregressive models 175
7.9.1.2 Estimation of parameters 176
7.9.1.3 First-order model (lag-one Markov model) 177
7.9.1.4 Second-order model (lag-two model) 179
7.9.1.5 Partial autocorrelation function (PAF) 180
7.9.2 Moving average (MA) models 181
7.9.2.1 Properties of MA models 182
7.9.2.2 Parameters of MA models 182
7.9.2.3 MA(1) model 183
7.9.2.4 MA(2) model 184
7.9.3 Autoregressive moving average (ARMA) models 185
7.9.3.1 Properties of ARMA(p,q) models 185
7.9.3.2 ARMA(1,1) model 185
7.9.4 Backshift operator 186
7.9.5 Difference operator 187
7.9.6 Autoregressive integrated moving average (ARIMA) models 187
7.10 Residual series 188
7.10.1 Test of independence 188
7.10.2 Test of normality 188
7.10.3 Other distributions 189
7.10.4 Test for parsimony 190
7.10.4.1 Akaike information criterion (AIC) and Bayesian information criterion (BIC) 190
7.10.4.2 Schwarz Bayesian criterion (SBC) 190
7.11 Forecasting 191
7.11.1 Minimum mean square error type difference equation 191
7.11.2 Confidence limits 193
7.11.3 Forecast errors 193
7.11.4 Numerical examples of forecasting 193
7.12 Synthetic data generation 196
7.13 ARMAX modelling 197
7.14 Kalman filtering 198
7.15 Parameter estimation 202
7.16 Applications 204
7.17 Concluding remarks 204
Appendix 7.1: Fourier series representation of a periodic function 205
References 207
8 Artificial neural networks 211
8.1 Introduction 211
8.2 Origin of artificial neural networks 212
8.2.1 Biological neuron 212
8.2.2 Artificial neuron 212
8.2.2.1 Bias/threshold 213
8.3 Unconstrained optimization techniques 215
8.3.1 Method of steepest descent 215
8.3.2 Newton’s method (quadratic approximation) 216
8.3.3 Gauss–Newton method 216
8.3.4 LMS algorithm 217
8.4 Perceptron 218
8.4.1 Linear separability 219
8.4.2 ‘AND’, ‘OR’, and ‘XOR’ operations 220
8.4.3 Multilayer perceptron (MLP) 221
8.4.4 Optimal structure of an MLP 222
8.5 Types of activation functions 223
8.5.1 Linear activation function (unbounded) 223
8.5.2 Saturating activation function (bounded) 223
8.5.3 Symmetric saturating activation function (bounded) 228
8.5.4 Positive linear activation function 228
8.5.5 Hardlimiter (Heaviside function; McCulloch–Pitts model) activation function 229
8.5.6 Symmetric hardlimiter activation function 229
8.5.7 Signum function 229
8.5.8 Triangular activation function 229
8.5.9 Sigmoid logistic activation function 229
8.5.10 Sigmoid hyperbolic tangent function 230
8.5.11 Radial basis functions 230
8.5.11.1 Multiquadric 230
8.5.11.2 Inverse multiquadric 231
8.5.11.3 Gaussian 231
8.5.11.4 Polyharmonic spline function 231
8.5.11.5 Thin plate spline function 231
8.5.12 Softmax activation function 231
8.6 Types of artificial neural networks 232
8.6.1 Feed-forward neural networks 233
8.6.2 Recurrent neural networks 234
8.6.2.1 Back-propagation through time (BPTT) 235
8.6.3 Self-organizing maps (Kohonen networks) 237
8.6.4 Product unit–based neural networks (PUNN) 239
8.6.4.1 Generation of the initial population 242
8.6.4.2 Fitness function 242
8.6.4.3 Parametric mutation 242
8.6.4.4 Structural mutation 244
8.6.5 Wavelet neural networks 245
8.7 Learning modes and learning 248
8.7.1 Learning modes 248
8.7.2 Types of learning 249
8.7.2.1 Error correction learning (optimum filtering) 249
8.7.2.2 Memory-based learning 249
8.7.2.3 Hebbian learning (Hebb, 1949) (unsupervised) 250
8.7.2.4 Competitive learning (unsupervised) 250
8.7.2.5 Boltzmann learning 251
8.7.2.6 Reinforced learning (unsupervised) 251
8.7.2.7 Hybrid learning 251
8.7.3 Learning rate (η) and momentum term (α) 252
8.8 BP algorithm 252
8.8.1 Generalized delta rule 256
8.9 ANN implementation details 256
8.9.1 Data preprocessing: Principal Component Analysis (PCA) 256
8.9.1.1 Eigenvalue decomposition 259
8.9.1.2 Deriving the new data set 260
8.9.2 Data normalization 260
8.9.3 Choice of input variables 262
8.9.4 Heuristics for implementation of BP 262
8.9.5 Stopping criteria 262
8.9.6 Performance criteria 263
8.10 Feedback Systems 264
8.11 Problems and limitations 265
8.12 Application areas 265
8.12.1 Hydrological applications 265
8.12.1.1 River discharge prediction 266
8.12.2 Environmental applications 276
8.12.2.1 Algal bloom prediction, Hong Kong 276
8.13 Concluding remarks 279
References 279
9 Radial basis function (RBF) neural networks 287
9.1 Introduction 287
9.2 Interpolation 287
9.3 Regularization 288
9.4 Generalized RBFs 291
9.5 Normalized radial basis functions (NRBFs) and kernel regression 294
9.6 Learning of RBFs 296
9.6.1 Fixed centres selection (random) 297
9.6.2 Forward selection 298
9.6.3 Orthogonal least squares (OLS) algorithm 298
9.6.3.1 Regularized orthogonal least squares (ROLS) algorithm 302
9.6.4 Self-organized selection of centres 304
9.6.5 Supervised selection of centres 306
9.6.6 Selection of centres using the concept of generalized degrees of freedom 307
9.6.6.1 Training of RBF networks 308
9.6.6.2 Computational procedure 312
9.6.7 Other methods of learning 313
9.7 Curse of dimensionality 314
9.8 Performance criteria 315
9.9 Comparison of MLP versus RBF networks 315
9.10 Applications 316
9.11 Concluding remarks 318
References 318
10 Fractals and chaos 321
10.1 Introduction 321
10.2 Fractal dimensions 322
10.2.1 Topological dimension 322
10.2.2 Fractal dimension 322
10.2.3 Hausdorff dimension 324
10.2.4 Box-counting dimension 324
10.2.5 Similarity dimension 325
10.2.6 Packing dimension 325
10.2.7 Information dimension 325
10.2.8 Capacity dimension 326
10.2.9 Rényi dimension 326
10.2.10 Correlation dimension 327
10.3 Examples of some well-known fractals 328
10.3.1 Cantor set 328
10.3.2 Sierpinski (gasket) triangle 330
10.3.3 Koch curve 332
10.3.4 Koch snowflake (or Koch star) 333
10.3.5 Mandelbrot set 333
10.3.6 Julia set 335
10.4 Perimeter–area relationship of fractals 335
10.5 Chaos 337
10.5.1 Butterfly effect 337
10.5.2 The n-body problem 338
10.6 Some definitions 339
10.6.1 Metric space 339
10.6.2 Manifold 339
10.6.3 Map 339
10.6.4 Attractor 340
10.6.4.1 Strange attractor 340
10.6.5 Dynamical system 340
10.6.6 Phase (or state) space 341
10.7 Invariants of chaotic systems 341
10.7.1 Lyapunov exponent 341
10.7.2 Entropy of a dynamical system 342
10.7.2.1 Kolmogorov–Sinai (K–S) entropy 343
10.7.2.2 Modified correlation entropy 344
10.7.2.3 K–S entropy and the Lyapunov spectrum 347
10.8 Examples of known chaotic attractors 348
10.8.1 Logistic map 348
10.8.1.1 Bifurcation 351
10.8.2 Hénon map 352
10.8.3 Lorenz map 352
10.8.4 Duffing equation 356
10.8.5 Rössler equations 359
10.8.6 Chua’s equation 360
10.9 Application areas of chaos 362
10.10 Concluding remarks 362
References 362
11 Dynamical systems approach to modelling 365
11.1 Introduction 365
11.2 Random versus chaotic deterministic systems 366
11.3 Time series as a dynamical system 367
11.3.1 Dynamical system 368
11.3.2 Sensitivity to initial conditions 369
11.4 Embedding 369
11.4.1 Embedding theorem 370
11.4.2 Embedding dimension 372
11.4.2.1 False nearest neighbour (FNN) method 372
11.4.2.2 Singular value decomposition (SVD) 375
11.4.3 Delay time 378
11.4.3.1 Average mutual information 378
11.4.4 Irregular embeddings 379
11.5 Phase (or state) space reconstruction 380
11.6 Phase space prediction 382
11.7 Inverse problem 384
11.7.1 Prediction error 385
11.8 Non-linearity and determinism 386
11.8.1 Test for non-linearity 386
11.8.1.1 Significance 386
11.8.1.2 Test statistics 387
11.8.1.3 Method of surrogate data 387
11.8.1.4 Null hypotheses 388
11.8.2 Test for determinism 389
11.9 Noise and noise reduction 390
11.9.1 Noise in data 390
11.9.2 Noise reduction 392
11.9.3 Noise level 396
11.10 Application areas 401
11.11 Concluding remarks 402
Appendices
Appendix 11.1: Derivation of Equation 11.81 403
Appendix 11.2: Proof of Equation 11.82b 407
Appendix 11.3: Proof of Equation A1-4 407
References 408
12 Support vector machines 413
12.1 Introduction 413
12.2 Linearly separable binary classification 413
12.3 Soft-margin binary classification 418
12.3.1 Linear soft margin 418
12.3.2 Non-linear classification 421
12.4 Support vector regression 424
12.4.1 Linear support vector regression 424
12.4.2 Non-linear support vector regression 426
12.5 Parameter selection 427
12.6 Kernel tricks 427
12.7 Quadratic programming 428
12.8 Limitations and problems 428
12.9 Application areas 428
12.10 Concluding remarks 429
Appendix 12.1: Statistical learning 429
Empirical risk minimization (ERM) 430
Structural risk minimization (SRM) 431
Appendix 12.2: Karush–Kuhn–Tucker (KKT) conditions 432
References 433
13 Fuzzy logic systems 437
13.1 Introduction 437
13.2 Fuzzy sets and fuzzy operations 438
13.2.1 Fuzzy sets 438
13.2.2 Logical operators AND, OR, and NOT 441
13.2.2.1 Intersection 441
13.2.2.2 Union 441
13.2.2.3 Other useful definitions 442
13.2.3 Linguistic variables 444
13.3 Membership functions 444
13.3.1 Triangular 444
13.3.2 Trapezoidal 445
13.3.3 Gaussian 446
13.3.4 Asymmetric Gaussian 446
13.3.5 Generalized bell-shaped Gaussian 447
13.3.6 Sigmoidal 447
13.3.7 Singleton 447
13.4 Fuzzy rules 448
13.5 Fuzzy inference 450
13.5.1 Fuzzy or approximate reasoning 450
13.5.2 Mamdani fuzzy inference system 451
13.5.2.1 Fuzzification of inputs 451
13.5.2.2 Application of fuzzy operators ‘AND’ or ‘OR’ 453
13.5.2.3 Implication from antecedent to consequent 453
13.5.2.4 Aggregation of consequents across the rules 456
13.5.2.5 Defuzzification 456
13.5.3 Takagi–Sugeno–Kang (TSK) fuzzy inference system 459
13.5.3.1 Clustering 461
13.5.4 Tsukamoto inference system 463
13.5.5 Larsen inference system 463
13.6 Neuro-fuzzy systems 465
13.6.1 Types of neuro-fuzzy systems 467
13.6.1.1 Umano and Ezawa (1991) fuzzy-neural model 468
13.7 Adaptive neuro-fuzzy inference systems (ANFIS) 469
13.7.1 Hybrid learning 472
13.8 Application areas 472
13.9 Concluding remarks 485
References 485
14 Genetic algorithms (GAs) and genetic programming (GP) 489
14.1 Introduction 489
14.2 Coding 490
14.3 Genetic operators 491
14.4 Parameters of GA 492
14.5 Genetic programming (GP) 492
14.6 Application areas 494
14.7 Concluding remarks 494
References 494