
Data Mining and Analysis:

Fundamental Concepts and Algorithms

Mohammed J. Zaki

Wagner Meira Jr.


Contents

Preface 1

1 Data Mining and Analysis 4

1.1 Data Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.2 Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

1.3 Data: Algebraic and Geometric View . . . . . . . . . . . . . . . . . . 7

1.3.1 Distance and Angle . . . . . . . . . . . . . . . . . . . . . . . . 9

1.3.2 Mean and Total Variance . . . . . . . . . . . . . . . . . . . . 13

1.3.3 Orthogonal Projection . . . . . . . . . . . . . . . . . . . . . . 14

1.3.4 Linear Independence and Dimensionality . . . . . . . . . . . . 15

1.4 Data: Probabilistic View . . . . . . . . . . . . . . . . . . . . . . . . . 17

1.4.1 Bivariate Random Variables . . . . . . . . . . . . . . . . . . . 24

1.4.2 Multivariate Random Variable . . . . . . . . . . . . . . . . . 28

1.4.3 Random Sample and Statistics . . . . . . . . . . . . . . . . . 29

1.5 Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

1.5.1 Exploratory Data Analysis . . . . . . . . . . . . . . . . . . . . 31

1.5.2 Frequent Pattern Mining . . . . . . . . . . . . . . . . . . . . . 33

1.5.3 Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

1.5.4 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

1.6 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

1.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

I Data Analysis Foundations 37

2 Numeric Attributes 38

2.1 Univariate Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

2.1.1 Measures of Central Tendency . . . . . . . . . . . . . . . . . . 39

2.1.2 Measures of Dispersion . . . . . . . . . . . . . . . . . . . . . . 43

2.2 Bivariate Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

2.2.1 Measures of Location and Dispersion . . . . . . . . . . . . . . 49

2.2.2 Measures of Association . . . . . . . . . . . . . . . . . . . . . 50


2.3 Multivariate Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

2.4 Data Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

2.5 Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

2.5.1 Univariate Normal Distribution . . . . . . . . . . . . . . . . . 61

2.5.2 Multivariate Normal Distribution . . . . . . . . . . . . . . . . 63

2.6 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

3 Categorical Attributes 71

3.1 Univariate Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

3.1.1 Bernoulli Variable . . . . . . . . . . . . . . . . . . . . . . . . 71

3.1.2 Multivariate Bernoulli Variable . . . . . . . . . . . . . . . . . 74

3.2 Bivariate Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

3.2.1 Attribute Dependence: Contingency Analysis . . . . . . . . . 88

3.3 Multivariate Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

3.3.1 Multi-way Contingency Analysis . . . . . . . . . . . . . . . . 95

3.4 Distance and Angle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

3.5 Discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

3.6 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

3.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

4 Graph Data 105

4.1 Graph Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

4.2 Topological Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . 110

4.3 Centrality Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

4.3.1 Basic Centralities . . . . . . . . . . . . . . . . . . . . . . . . . 115

4.3.2 Web Centralities . . . . . . . . . . . . . . . . . . . . . . . . . 117

4.4 Graph Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

4.4.1 Erdős-Rényi Random Graph Model . . . . . . . . . . . . . . 129

4.4.2 Watts-Strogatz Small-world Graph Model . . . . . . . . . . . 133

4.4.3 Barabási-Albert Scale-free Model . . . . . . . . . . . . . . . . 139

4.5 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

4.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

5 Kernel Methods 150

5.1 Kernel Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

5.1.1 Reproducing Kernel Map . . . . . . . . . . . . . . . . . . . . 156

5.1.2 Mercer Kernel Map . . . . . . . . . . . . . . . . . . . . . . . . 158

5.2 Vector Kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

5.3 Basic Kernel Operations in Feature Space . . . . . . . . . . . . . . . 166

5.4 Kernels for Complex Objects . . . . . . . . . . . . . . . . . . . . . . 173

5.4.1 Spectrum Kernel for Strings . . . . . . . . . . . . . . . . . . . 173

5.4.2 Diffusion Kernels on Graph Nodes . . . . . . . . . . . . . . . 175


5.5 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

5.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

6 High-Dimensional Data 182

6.1 High-Dimensional Objects . . . . . . . . . . . . . . . . . . . . . . . . 182

6.2 High-Dimensional Volumes . . . . . . . . . . . . . . . . . . . . . . . . 184

6.3 Hypersphere Inscribed within Hypercube . . . . . . . . . . . . . . . . 187

6.4 Volume of Thin Hypersphere Shell . . . . . . . . . . . . . . . . . . . 189

6.5 Diagonals in Hyperspace . . . . . . . . . . . . . . . . . . . . . . . . . 190

6.6 Density of the Multivariate Normal . . . . . . . . . . . . . . . . . . . 191

6.7 Appendix: Derivation of Hypersphere Volume . . . . . . . . . . . . . 195

6.8 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

6.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

7 Dimensionality Reduction 204

7.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204

7.2 Principal Component Analysis . . . . . . . . . . . . . . . . . . . . . . 209

7.2.1 Best Line Approximation . . . . . . . . . . . . . . . . . . . . 209

7.2.2 Best Two-dimensional Approximation . . . . . . . . . . . . . 213

7.2.3 Best r-dimensional Approximation . . . . . . . . . . . . . . . 217

7.2.4 Geometry of PCA . . . . . . . . . . . . . . . . . . . . . . . . 222

7.3 Kernel Principal Component Analysis (Kernel PCA) . . . . . . . . . 225

7.4 Singular Value Decomposition . . . . . . . . . . . . . . . . . . . . . . 233

7.4.1 Geometry of SVD . . . . . . . . . . . . . . . . . . . . . . . . 234

7.4.2 Connection between SVD and PCA . . . . . . . . . . . . . . . 235

7.5 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

7.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238

II Frequent Pattern Mining 240

8 Itemset Mining 241

8.1 Frequent Itemsets and Association Rules . . . . . . . . . . . . . . . . 241

8.2 Itemset Mining Algorithms . . . . . . . . . . . . . . . . . . . . . . . 245

8.2.1 Level-Wise Approach: Apriori Algorithm . . . . . . . . . . . 247

8.2.2 Tidset Intersection Approach: Eclat Algorithm . . . . . . . . 250

8.2.3 Frequent Pattern Tree Approach: FPGrowth Algorithm . . . 256

8.3 Generating Association Rules . . . . . . . . . . . . . . . . . . . . . . 260

8.4 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263

8.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263


9 Summarizing Itemsets 269

9.1 Maximal and Closed Frequent Itemsets . . . . . . . . . . . . . . . . . 269

9.2 Mining Maximal Frequent Itemsets: GenMax Algorithm . . . . . . . 273

9.3 Mining Closed Frequent Itemsets: Charm Algorithm . . . . . . . . . 275

9.4 Non-Derivable Itemsets . . . . . . . . . . . . . . . . . . . . . . . . . . 278

9.5 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284

9.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285

10 Sequence Mining 289

10.1 Frequent Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289

10.2 Mining Frequent Sequences . . . . . . . . . . . . . . . . . . . . . . . 290

10.2.1 Level-Wise Mining: GSP . . . . . . . . . . . . . . . . . . . . . 292

10.2.2 Vertical Sequence Mining: SPADE . . . . . . . . . . . . . . . 293

10.2.3 Projection-Based Sequence Mining: PrefixSpan . . . . . . . . 296

10.3 Substring Mining via Suffix Trees . . . . . . . . . . . . . . . . . . . . 298

10.3.1 Suffix Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298

10.3.2 Ukkonen’s Linear Time Algorithm . . . . . . . . . . . . . . . 301

10.4 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309

10.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309

11 Graph Pattern Mining 314

11.1 Isomorphism and Support . . . . . . . . . . . . . . . . . . . . . . . . 314

11.2 Candidate Generation . . . . . . . . . . . . . . . . . . . . . . . . . . 318

11.2.1 Canonical Code . . . . . . . . . . . . . . . . . . . . . . . . . . 320

11.3 The gSpan Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 323

11.3.1 Extension and Support Computation . . . . . . . . . . . . . . 326

11.3.2 Canonicality Checking . . . . . . . . . . . . . . . . . . . . . . 330

11.4 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331

11.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333

12 Pattern and Rule Assessment 337

12.1 Rule and Pattern Assessment Measures . . . . . . . . . . . . . . . . 337

12.1.1 Rule Assessment Measures . . . . . . . . . . . . . . . . . . . . 338

12.1.2 Pattern Assessment Measures . . . . . . . . . . . . . . . . . . 346

12.1.3 Comparing Multiple Rules and Patterns . . . . . . . . . . . . 349

12.2 Significance Testing and Confidence Intervals . . . . . . . . . . . . . 354

12.2.1 Fisher Exact Test for Productive Rules . . . . . . . . . . . . . 354

12.2.2 Permutation Test for Significance . . . . . . . . . . . . . . . . 359

12.2.3 Bootstrap Sampling for Confidence Interval . . . . . . . . . . 364

12.3 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367

12.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368


III Clustering 370

13 Representative-based Clustering 371

13.1 K-means Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372

13.2 Kernel K-means . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375

13.3 Expectation Maximization (EM) Clustering . . . . . . . . . . . . . . 381

13.3.1 EM in One Dimension . . . . . . . . . . . . . . . . . . . . . . 383

13.3.2 EM in d-Dimensions . . . . . . . . . . . . . . . . . . . . . . . 386

13.3.3 Maximum Likelihood Estimation . . . . . . . . . . . . . . . . 393

13.3.4 Expectation-Maximization Approach . . . . . . . . . . . . . . 397

13.4 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400

13.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401

14 Hierarchical Clustering 404

14.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404

14.2 Agglomerative Hierarchical Clustering . . . . . . . . . . . . . . . . . 407

14.2.1 Distance between Clusters . . . . . . . . . . . . . . . . . . . . 407

14.2.2 Updating Distance Matrix . . . . . . . . . . . . . . . . . . . . 411

14.2.3 Computational Complexity . . . . . . . . . . . . . . . . . . . 413

14.3 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413

14.4 Exercises and Projects . . . . . . . . . . . . . . . . . . . . . . . . . . 414

15 Density-based Clustering 417

15.1 The DBSCAN Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 418

15.2 Kernel Density Estimation . . . . . . . . . . . . . . . . . . . . . . . . 421

15.2.1 Univariate Density Estimation . . . . . . . . . . . . . . . . . 422

15.2.2 Multivariate Density Estimation . . . . . . . . . . . . . . . . 424

15.2.3 Nearest Neighbor Density Estimation . . . . . . . . . . . . . . 427

15.3 Density-based Clustering: DENCLUE . . . . . . . . . . . . . . . . . 428

15.4 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434

15.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434

16 Spectral and Graph Clustering 438

16.1 Graphs and Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 438

16.2 Clustering as Graph Cuts . . . . . . . . . . . . . . . . . . . . . . . . 446

16.2.1 Clustering Objective Functions: Ratio and Normalized Cut . 448

16.2.2 Spectral Clustering Algorithm . . . . . . . . . . . . . . . . . . 451

16.2.3 Maximization Objectives: Average Cut and Modularity . . . . 455

16.3 Markov Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463

16.4 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470

16.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471


17 Clustering Validation 473

17.1 External Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474

17.1.1 Matching Based Measures . . . . . . . . . . . . . . . . . . . . 474

17.1.2 Entropy Based Measures . . . . . . . . . . . . . . . . . . . . . 479

17.1.3 Pair-wise Measures . . . . . . . . . . . . . . . . . . . . . . . . 482

17.1.4 Correlation Measures . . . . . . . . . . . . . . . . . . . . . . . 486

17.2 Internal Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489

17.3 Relative Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498

17.3.1 Cluster Stability . . . . . . . . . . . . . . . . . . . . . . . . . 505

17.3.2 Clustering Tendency . . . . . . . . . . . . . . . . . . . . . . . 508

17.4 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513

17.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514

IV Classification 516

18 Probabilistic Classification 517

18.1 Bayes Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517

18.1.1 Estimating the Prior Probability . . . . . . . . . . . . . . . . 518

18.1.2 Estimating the Likelihood . . . . . . . . . . . . . . . . . . . . 518

18.2 Naive Bayes Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . 524

18.3 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528

18.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528

19 Decision Tree Classifier 530

19.1 Decision Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532

19.2 Decision Tree Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 535

19.2.1 Split-point Evaluation Measures . . . . . . . . . . . . . . . . 536

19.2.2 Evaluating Split-points . . . . . . . . . . . . . . . . . . . . . . 537

19.2.3 Computational Complexity . . . . . . . . . . . . . . . . . . . 545

19.3 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546

19.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547

20 Linear Discriminant Analysis 549

20.1 Optimal Linear Discriminant . . . . . . . . . . . . . . . . . . . . . . 549

20.2 Kernel Discriminant Analysis . . . . . . . . . . . . . . . . . . . . . . 556

20.3 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564

20.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564

21 Support Vector Machines 566

21.1 Linear Discriminants and Margins . . . . . . . . . . . . . . . . . . . 566

21.2 SVM: Linear and Separable Case . . . . . . . . . . . . . . . . . . . . 572

21.3 Soft Margin SVM: Linear and Non-Separable Case . . . . . . . . . . 577


21.3.1 Hinge Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578

21.3.2 Quadratic Loss . . . . . . . . . . . . . . . . . . . . . . . . . . 582

21.4 Kernel SVM: Nonlinear Case . . . . . . . . . . . . . . . . . . . . . . 583

21.5 SVM Training Algorithms . . . . . . . . . . . . . . . . . . . . . . . . 588

21.5.1 Dual Solution: Stochastic Gradient Ascent . . . . . . . . . . . 588

21.5.2 Primal Solution: Newton Optimization . . . . . . . . . . . . . 593

22 Classification Assessment 602

22.1 Classification Performance Measures . . . . . . . . . . . . . . . . . . 602

22.1.1 Contingency Table Based Measures . . . . . . . . . . . . . . . 604

22.1.2 Binary Classification: Positive and Negative Class . . . . . . 607

22.1.3 ROC Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 611

22.2 Classifier Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 616

22.2.1 K-fold Cross-Validation . . . . . . . . . . . . . . . . . . . . . 617

22.2.2 Bootstrap Resampling . . . . . . . . . . . . . . . . . . . . . . 618

22.2.3 Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . 620

22.2.4 Comparing Classifiers: Paired t-Test . . . . . . . . . . . . . . 625

22.3 Bias-Variance Decomposition . . . . . . . . . . . . . . . . . . . . . . 627

22.3.1 Ensemble Classifiers . . . . . . . . . . . . . . . . . . . . . . . 632

22.4 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638

22.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639

Index 641


Preface

This book is an outgrowth of data mining courses at RPI and UFMG; the RPI course

has been offered every Fall since 1998, whereas the UFMG course has been offered

since 2002. While there are several good books on data mining and related topics,

we felt that many of them are either too high-level or too advanced. Our goal was

to write an introductory text which focuses on the fundamental algorithms in data

mining and analysis. It lays the mathematical foundations for the core data mining

methods, with key concepts explained when first encountered; the book also tries to

build the intuition behind the formulas to aid understanding.

The main parts of the book include exploratory data analysis, frequent pattern

mining, clustering and classification. The book lays the basic foundations of these

tasks, and it also covers cutting-edge topics like kernel methods, high-dimensional

data analysis, and complex graphs and networks. It integrates concepts from related

disciplines like machine learning and statistics, and is also ideal for a course on data

analysis. Most of the prerequisite material is covered in the text, especially on linear

algebra, and probability and statistics.

The book includes many examples to illustrate the main technical concepts. It also has end-of-chapter exercises, which have been used in class. All of the algorithms in the book have been implemented by the authors. We suggest that readers use their favorite data analysis and mining software to work through our examples, and to implement the algorithms we describe in the text; we recommend the R software, or the Python language with its NumPy package. The datasets used and other supplementary material like project ideas, slides, and so on, are available online at the book's companion site and its mirrors at RPI and UFMG:

• http://dataminingbook.info

• http://www.cs.rpi.edu/~zaki/dataminingbook

• http://www.dcc.ufmg.br/dataminingbook

Having understood the basic principles and algorithms in data mining and data

analysis, the readers will be well equipped to develop their own methods or use more

advanced techniques.


Suggested Roadmaps

The chapter dependency graph is shown in Figure 1. We suggest some typical

roadmaps for courses and readings based on this book. For an undergraduate level

course, we suggest the following chapters: 1-3, 8, 10, 12-15, 17-19, and 21-22. For an

undergraduate course without exploratory data analysis, we recommend Chapters

1, 8-15, 17-19, and 21-22. For a graduate course, one possibility is to quickly go

over the material in Part I, or to assume it as background reading and to directly

cover Chapters 9-22; the other parts of the book, namely frequent pattern mining

(Part II), clustering (Part III), and classification (Part IV) can be covered in any

order. For a course on data analysis, the chapters must include 1-7, 13-14, 15 (Section

2), and 20. Finally, for a course with an emphasis on graphs and kernels we suggest

Chapters 4, 5, 7 (Sections 1-3), 11-12, 13 (Sections 1-2), 16-17, 20-22.

[Figure 1: Chapter Dependencies]

Acknowledgments

Initial drafts of this book have been used in many data mining courses. We received

many valuable comments and corrections from both faculty and students. Our

thanks go to

• Muhammad Abulaish, Jamia Millia Islamia, India


• Mohammad Al Hasan, Indiana University Purdue University at Indianapolis

• Marcio Luiz Bunte de Carvalho, Universidade Federal de Minas Gerais, Brazil

• Loïc Cerf, Universidade Federal de Minas Gerais, Brazil

• Ayhan Demiriz, Sakarya University, Turkey

• Murat Dundar, Indiana University Purdue University at Indianapolis

• Jun Luke Huan, University of Kansas

• Ruoming Jin, Kent State University

• Latifur Khan, University of Texas, Dallas

• Pauli Miettinen, Max-Planck-Institut für Informatik, Germany

• Suat Ozdemir, Gazi University, Turkey

• Naren Ramakrishnan, Virginia Polytechnic Institute and State University

• Leonardo Chaves Dutra da Rocha, Universidade Federal de São João del-Rei, Brazil

• Saeed Salem, North Dakota State University

• Ankur Teredesai, University of Washington, Tacoma

• Hannu Toivonen, University of Helsinki, Finland

• Adriano Alonso Veloso, Universidade Federal de Minas Gerais, Brazil

• Jason T.L. Wang, New Jersey Institute of Technology

• Jianyong Wang, Tsinghua University, China

• Jiong Yang, Case Western Reserve University

• Jieping Ye, Arizona State University

We would like to thank all the students enrolled in our data mining courses at RPI

and UFMG, and also the anonymous reviewers who provided technical comments

on various chapters. In addition, we thank CNPq, CAPES, FAPEMIG, Inweb –

the National Institute of Science and Technology for the Web, and Brazil’s Science

without Borders program for their support. We thank Lauren Cowles, our editor at

Cambridge University Press, for her guidance and patience in realizing this book.

Finally, on a more personal front, MJZ would like to dedicate the book to Amina,

Abrar, Afsah, and his parents, and WMJ would like to dedicate the book to Patricia,

Gabriel, Marina and his parents, Wagner and Marlene. This book would not have

been possible without their patience and support.

Mohammed J. Zaki (Troy)
Wagner Meira, Jr. (Belo Horizonte)

Summer 2013


Chapter 1

Data Mining and Analysis

Data mining is the process of discovering insightful, interesting, and novel patterns,

as well as descriptive, understandable and predictive models from large-scale data.

We begin this chapter by looking at basic properties of data modeled as a data matrix. We emphasize the geometric and algebraic views, as well as the probabilistic interpretation of data. We then discuss the main data mining tasks, which span exploratory data analysis, frequent pattern mining, clustering and classification, laying out the road-map for the book.

1.1 Data Matrix

Data can often be represented or abstracted as an n×d data matrix, with n rows and

d columns, where rows correspond to entities in the dataset, and columns represent

attributes or properties of interest. Each row in the data matrix records the observed

attribute values for a given entity. The n × d data matrix is given as

$$
D = \begin{pmatrix}
 & X_1 & X_2 & \cdots & X_d \\
\mathbf{x}_1 & x_{11} & x_{12} & \cdots & x_{1d} \\
\mathbf{x}_2 & x_{21} & x_{22} & \cdots & x_{2d} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\mathbf{x}_n & x_{n1} & x_{n2} & \cdots & x_{nd}
\end{pmatrix}
$$

where $\mathbf{x}_i$ denotes the $i$-th row, which is a $d$-tuple given as

$$\mathbf{x}_i = (x_{i1}, x_{i2}, \ldots, x_{id})$$

and where $X_j$ denotes the $j$-th column, which is an $n$-tuple given as

$$X_j = (x_{1j}, x_{2j}, \ldots, x_{nj})$$

Depending on the application domain, rows may also be referred to as entities, instances, examples, records, transactions, objects, points, feature-vectors, tuples and so on. Likewise, columns may also be called attributes, properties, features, dimensions, variables, fields, and so on. The number of instances n is referred to as the size of the data, whereas the number of attributes d is called the dimensionality of the data. The analysis of a single attribute is referred to as univariate analysis, whereas the simultaneous analysis of two attributes is called bivariate analysis and the simultaneous analysis of more than two attributes is called multivariate analysis.
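To make the matrix notation concrete, here is a minimal NumPy sketch (Python with NumPy, which the preface recommends); the toy matrix reuses the first four numeric rows of Table 1.1 below, and the variable names are illustrative:

```python
import numpy as np

# Toy data matrix D: n = 4 entities (rows), d = 4 numeric attributes (columns),
# taken from the first four rows of the Iris extract in Table 1.1.
D = np.array([
    [5.9, 3.0, 4.2, 1.5],
    [6.9, 3.1, 4.9, 1.5],
    [6.6, 2.9, 4.6, 1.3],
    [4.6, 3.2, 1.4, 0.2],
])

n, d = D.shape   # size n and dimensionality d
x1 = D[0, :]     # row x_1: the d-tuple of attribute values for the first entity
X2 = D[:, 1]     # column X_2: the n-tuple of values of the second attribute
print(n, d)      # 4 4
print(x1)        # [5.9 3.  4.2 1.5]
print(X2)        # [3.  3.1 2.9 3.2]
```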


       sepal    sepal    petal    petal    class
       length   width    length   width
       X1       X2       X3       X4       X5
x1     5.9      3.0      4.2      1.5      Iris-versicolor
x2     6.9      3.1      4.9      1.5      Iris-versicolor
x3     6.6      2.9      4.6      1.3      Iris-versicolor
x4     4.6      3.2      1.4      0.2      Iris-setosa
x5     6.0      2.2      4.0      1.0      Iris-versicolor
x6     4.7      3.2      1.3      0.2      Iris-setosa
x7     6.5      3.0      5.8      2.2      Iris-virginica
x8     5.8      2.7      5.1      1.9      Iris-virginica
...    ...      ...      ...      ...      ...
x149   7.7      3.8      6.7      2.2      Iris-virginica
x150   5.1      3.4      1.5      0.2      Iris-setosa

Table 1.1: Extract from the Iris Dataset

Example 1.1: Table 1.1 shows an extract of the Iris dataset; the complete data

forms a 150×5 data matrix. Each entity is an Iris flower, and the attributes include

sepal length, sepal width, petal length and petal width in centimeters, and

the type or class of the Iris flower. The first row is given as the 5-tuple

x1 = (5.9, 3.0, 4.2, 1.5, Iris-versicolor)
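For readers who want the complete dataset rather than the extract, a hedged sketch using scikit-learn's bundled copy of Iris (an assumption on our part: scikit-learn is not mentioned in the text, it stores the class attribute separately from the four numeric attributes, and its row ordering differs from Table 1.1):

```python
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data                            # 150 x 4 matrix of the numeric attributes
labels = iris.target_names[iris.target]  # the class attribute, as strings

print(X.shape)          # (150, 4)
print(X[0], labels[0])  # first row and its class label
```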

Not all datasets are in the form of a data matrix. For instance, more complex datasets can be in the form of sequences (e.g., DNA, proteins), text, time-series, images, audio, video, and so on, which may need special techniques for analysis. However, in many cases even if the raw data is not a data matrix it can usually be transformed into that form via feature extraction. For example, given a database of images, we can create a data matrix where rows represent images and columns correspond to image features like color, texture, and so on. Sometimes, certain attributes may have special semantics associated with them requiring special treatment. For instance, temporal or spatial attributes are often treated differently. It is also worth noting that traditional data analysis assumes that each entity or instance is independent. However, given the interconnected nature of the world we live in, this assumption may not always hold. Instances may be connected to other instances via various kinds of relationships, giving rise to a data graph, where a node represents an entity and an edge represents the relationship between two entities.

1.2 Attributes

Attributes may be classified into two main types depending on their domain, i.e.,

depending on the types of values they take on.

Numeric Attributes A numeric attribute is one that has a real-valued or integer-valued domain. For example, Age with domain(Age) = $\mathbb{N}$, where $\mathbb{N}$ denotes the set of natural numbers (non-negative integers), is numeric, and so is petal length in Table 1.1, with domain(petal length) = $\mathbb{R}^+$ (the set of all positive real numbers). Numeric attributes that take on a finite or countably infinite set of values are called discrete, whereas those that can take on any real value are called continuous. As a special case of discrete, if an attribute has as its domain the set {0, 1}, it is called a binary attribute. Numeric attributes can be further classified into two types:

• Interval-scaled: For these kinds of attributes only differences (addition or subtraction) make sense. For example, attribute temperature measured in °C or °F is interval-scaled. If it is 20°C on one day and 10°C on the following day, it is meaningful to talk about a temperature drop of 10°C, but it is not meaningful to say that it is twice as cold as the previous day.

• Ratio-scaled: Here one can compute both differences as well as ratios between values. For example, for attribute Age, we can say that someone who is 20 years old is twice as old as someone who is 10 years old.

Categorical Attributes A categorical attribute is one that has a set-valued domain composed of a set of symbols. For example, Sex and Education could be categorical attributes with their domains given as

domain(Sex) = {M, F}
domain(Education) = {HighSchool, BS, MS, PhD}

Categorical attributes may be of two types:

• Nominal: The attribute values in the domain are unordered, and thus only equality comparisons are meaningful. That is, we can check only whether the value of the attribute for two given instances is the same or not. For example, Sex is a nominal attribute. Also class in Table 1.1 is a nominal attribute with domain(class) = {iris-setosa, iris-versicolor, iris-virginica}.

• Ordinal: The attribute values are ordered, and thus both equality comparisons (is one value equal to another) and inequality comparisons (is one value less than or greater than another) are allowed, though it may not be possible to quantify the difference between values. For example, Education is an ordinal attribute, since its domain values are ordered by increasing educational qualification.
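This distinction matters when categorical attributes must be encoded numerically for analysis. A minimal sketch in plain Python (the encodings are illustrative choices, not prescribed by the text): an ordinal attribute like Education can be mapped to ranks that preserve its order, whereas a nominal attribute like Sex is better given indicator (one-hot) columns, since any single numeric code would impose an order the domain does not have.

```python
# Ordinal attribute: integer ranks preserve the domain ordering.
edu_rank = {"HighSchool": 0, "BS": 1, "MS": 2, "PhD": 3}
education = ["BS", "PhD", "HighSchool"]
print([edu_rank[v] for v in education])   # [1, 3, 0]

# Nominal attribute: one-hot indicators; only equality is meaningful.
sex_domain = ["M", "F"]
sex = ["F", "M", "F"]
print([[int(v == s) for s in sex_domain] for v in sex])
# [[0, 1], [1, 0], [0, 1]]
```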

1.3 Data: Algebraic and Geometric View

If the d attributes or dimensions in the data matrix D are all numeric, then each row can be considered as a d-dimensional point

$$\mathbf{x}_i = (x_{i1}, x_{i2}, \ldots, x_{id}) \in \mathbb{R}^d$$

or equivalently, each row may be considered as a d-dimensional column vector (all vectors are assumed to be column vectors by default)

$$
\mathbf{x}_i = \begin{pmatrix} x_{i1} \\ x_{i2} \\ \vdots \\ x_{id} \end{pmatrix}
= \begin{pmatrix} x_{i1} & x_{i2} & \cdots & x_{id} \end{pmatrix}^T \in \mathbb{R}^d
$$

where $T$ is the matrix transpose operator.

The d-dimensional Cartesian coordinate space is specified via the d unit vectors, called the standard basis vectors, along each of the axes. The j-th standard basis vector $\mathbf{e}_j$ is the d-dimensional unit vector whose j-th component is 1 and the rest of the components are 0:

$$\mathbf{e}_j = (0, \ldots, 1_j, \ldots, 0)^T$$

Any other vector in $\mathbb{R}^d$ can be written as a linear combination of the standard basis vectors. For example, each of the points $\mathbf{x}_i$ can be written as the linear combination

$$\mathbf{x}_i = x_{i1}\mathbf{e}_1 + x_{i2}\mathbf{e}_2 + \cdots + x_{id}\mathbf{e}_d = \sum_{j=1}^{d} x_{ij}\,\mathbf{e}_j$$

where the scalar value $x_{ij}$ is the coordinate value along the $j$-th axis or attribute.
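As a quick numerical check of this identity, the following NumPy sketch (toy values: row x1 of Table 1.1) rebuilds a point from the standard basis vectors:

```python
import numpy as np

x = np.array([5.9, 3.0, 4.2, 1.5])  # a point x_i in R^d, here d = 4
d = len(x)
E = np.eye(d)                       # row j of E is the standard basis vector e_j

# x_i = sum_j x_ij * e_j, summed explicitly over the basis
recon = sum(x[j] * E[j] for j in range(d))
assert np.allclose(recon, x)
print(recon)                        # [5.9 3.  4.2 1.5]
```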
