
A Guide to Selecting Software Measures and Metrics

Capers Jones

CRC Press

Taylor & Francis Group

6000 Broken Sound Parkway NW, Suite 300

Boca Raton, FL 33487-2742

© 2017 by Taylor & Francis Group, LLC

CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed on acid-free paper

International Standard Book Number-13: 978-1-1380-3307-8 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at

http://www.taylorandfrancis.com

and the CRC Press Web site at

http://www.crcpress.com


Contents

Preface ...............................................................................................................vii

Acknowledgments ..............................................................................................xi

About the Author .............................................................................................xiii

1 Introduction ...........................................................................................1

2 Variations in Software Activities by Type of Software .........................17

3 Variations in Software Development Activities by Type of Software .........29

4 Variations in Occupation Groups, Staff Size, Team Experience ...........35

5 Variations due to Inaccurate Software Metrics That Distort Reality .......45

6 Variations in Measuring Agile and CMMI Development ....................51

7 Variations among 60 Development Methodologies ..............................59

8 Variations in Software Programming Languages ................................63

9 Variations in Software Reuse from 0% to 90% .....................................69

10 Variations due to Project, Phase, and Activity Measurements .............77

11 Variations in Burden Rates or Overhead Costs ....................................83

12 Variations in Costs by Industry ............................................................87

13 Variations in Costs by Occupation Group............................................93

14 Variations in Work Habits and Unpaid Overtime ................................97

15 Variations in Functional and Nonfunctional Requirements ................105


16 Variations in Software Quality Results ..............................................115

Missing Software Defect Data .................................................................116

Software Defect Removal Efficiency ........................................................ 117

Money Spent on Software Bug Removal .................................................. 119

Wasted Time by Software Engineers due to Poor Quality .......................121

Bad Fixes or New Bugs in Bug Repairs ....................................................121

Bad-Test Cases (An Invisible Problem) .....................................................122

Error-Prone Modules with High Numbers of Bugs ..................................122

Limited Scopes of Software Quality Companies ......................................123

Lack of Empirical Data for ISO Quality Standards .................................134

Poor Test Case Design .............................................................................135

Best Software Quality Metrics .................................................................135

Worst Software Quality Metrics ..............................................................136

Why Cost per Defect Distorts Reality .....................................................137

Case A: Poor Quality ..........................................................................137

Case B: Good Quality .........................................................................137

Case C: Zero Defects ..........................................................................137

Be Cautious of Technical Debt ................................................................139

The SEI CMMI Helps Defense Software Quality ....................................139

Software Cost Drivers and Poor Quality .................................................139

Software Quality by Application Size ......................................................140

17 Variations in Pattern-Based Early Sizing ...........................................147

18 Gaps and Errors in When Projects Start. When Do They End? .........157

19 Gaps and Errors in Measuring Software Quality ...............................165

Measuring the Cost of Quality ................................................................179

20 Gaps and Errors due to Multiple Metrics without Conversion Rules ......221

21 Gaps and Errors in Tools, Methodologies, Languages .......................227

Appendix 1: Alphabetical Discussion of Metrics and Measures .................233

Appendix 2: Twenty-Five Software Engineering Targets from 2016

through 2021 ...............................................................................................333

Suggested Readings on Software Measures and Metric Issues ................... 343

Summary and Conclusions on Measures and Metrics ................................349

Index ...........................................................................................................351


Preface

This is my 16th book overall and my second book on software measurement. My first measurement book was Applied Software Measurement, which was published by McGraw-Hill in 1991, had a second edition in 1996, and a third edition in 2008.

The reason I decided on a new book on measurement instead of a fourth edition of my older book is that this new book has a different vantage point. The first book was a kind of tutorial on software measurement, with practical advice on getting started and on producing useful reports for management and clients.

This new book is not a tutorial on measurement, but rather a critique of a number of bad measurement practices, hazardous metrics, and huge gaps and omissions in the software literature that leave major topics uncovered and unexamined. In fact, the completeness of software historical data among more than 100 companies and 20 government groups is only about 37%.

In my regular professional work, I help clients collect benchmark data. In doing this, I have noticed major gaps and omissions that need to be corrected if the data are going to be useful for comparisons or for estimating future projects.

Among the more serious gaps are leaks from software effort data that, if not corrected, will distort reality and make the benchmarks almost useless and possibly even harmful.

One of the most common leaks is unpaid overtime. Software is a very labor-intensive occupation, and many of us work very long hours, but few companies actually record unpaid overtime. This means that software effort is underreported by around 15%, which is too large a value to ignore.
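As an illustration, the size of this leak can be sketched with hypothetical numbers (only the roughly 15% underreporting figure comes from the discussion above; the project size and hours are invented):

```python
# If recorded (paid) hours omit ~15% of real effort, the actual effort
# is paid_hours / (1 - 0.15), and apparent productivity is inflated.
# All figures below are hypothetical.

function_points = 1000   # delivered size of a hypothetical project
paid_hours = 10_000      # hours captured by time tracking
underreporting = 0.15    # share of total effort that goes unrecorded

actual_hours = paid_hours / (1 - underreporting)

recorded_productivity = function_points / paid_hours   # 0.100 FP/hour
actual_productivity = function_points / actual_hours   # 0.085 FP/hour

print(f"Recorded: {recorded_productivity:.3f} FP/hour")
print(f"Actual:   {actual_productivity:.3f} FP/hour")
```

A benchmark built from the recorded figure would overstate productivity by nearly 18% relative to reality (0.100 versus 0.085 function points per hour), which is why the leak matters for comparisons and estimates.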

Other leaks include the work of part-time specialists who come and go as needed. There are dozens of these specialists, and their combined effort can top 45% of total software effort on large projects. There are too many to show all of them, but some of the more common include the following:

1. Agile coaches
2. Architects (software)
3. Architects (systems)
4. Architects (enterprise)
5. Assessment specialists
6. Capability maturity model integrated (CMMI) specialists
7. Configuration control specialists
8. Cost estimating specialists
9. Customer support specialists
10. Database administration specialists
11. Education specialists
12. Enterprise resource planning (ERP) specialists
13. Expert-system specialists
14. Function point specialists (certified)
15. Graphics production specialists
16. Human factors specialists
17. Integration specialists
18. Library specialists (for project libraries)
19. Maintenance specialists
20. Marketing specialists
21. Member of the technical staff (multiple specialties)
22. Measurement specialists
23. Metric specialists
24. Project cost analysis specialists
25. Project managers
26. Project office specialists
27. Process improvement specialists
28. Quality assurance specialists
29. Scrum masters
30. Security specialists
31. Technical writing specialists
32. Testing specialists (automated)
33. Testing specialists (manual)
34. Web page design specialists
35. Web masters

Another major leak is failing to record the rather high costs of users when they participate in software projects, such as embedded users on agile projects. Users also provide requirements, participate in design and phase reviews, perform acceptance testing, and carry out many other critical activities. User costs can collectively approach 85% of the effort of the actual software development teams.

Without multiplying examples, this new book is somewhat like a medical book that discusses treatments for common diseases. It goes through a series of measurement and metric problems and explains the damage each can cause. There are also some suggestions for overcoming these problems, but the main focus of the book is to show readers all of the major gaps and problems that need to be corrected in order to accumulate accurate and useful benchmarks for software projects. I hope readers will find the information to be of use.

Quality data are even worse than productivity and resource data and are only about 25% complete. The new technical debt metric is only about 17% complete. Few companies even start quality measures until after unit test, so all early bugs found by reviews, desk checks, and static analysis are invisible. Technical debt does not include consequential damages to clients, nor does it include litigation costs when clients sue for poor quality.

Hardly anyone measures bad fixes, or new bugs in the bug repairs themselves. About 7% of bug repairs contain new bugs, and this can rise above 35% for modules with high cyclomatic complexity. Even fewer companies measure bad-test cases, or bugs in test libraries, which average about 15%.
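The compounding effect of bad fixes can be sketched as a geometric series: if a fraction p of repairs inject a new defect, each round of repairs spawns a smaller follow-on round. This is a simplified model; only the 7% and 35% rates come from the discussion above:

```python
# Simplified model: every repair has probability p of injecting a new
# bug, and every injected bug must itself be repaired, so total repairs
# for D initial defects converge to the geometric sum D / (1 - p).

def total_repairs(initial_defects: float, bad_fix_rate: float) -> float:
    """Total repairs needed once bad fixes are repaired in turn."""
    return initial_defects / (1 - bad_fix_rate)

print(round(total_repairs(1000, 0.07)))  # typical module: 1075 repairs
print(round(total_repairs(1000, 0.35)))  # high-complexity module: 1538 repairs
```

At a 7% bad-fix rate the overhead is modest; at 35% the repair workload grows by more than half, which is why unmeasured bad fixes in high-complexity modules distort both cost and quality data.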

Yet another problem with software measurements has been the continuous use, for more than 50 years, of metrics that distort reality and violate standard economic principles. The two most flagrant metrics with proven errors are cost per defect and lines of code (LOC). The cost per defect metric penalizes quality and makes buggy applications look better than they are. The LOC metric makes requirements and design invisible and, even worse, penalizes modern high-level programming languages.
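Both distortions come down to fixed costs hiding inside a ratio. A sketch with hypothetical numbers (the text above establishes only the direction of the distortions, not these figures):

```python
# 1. Cost per defect: test preparation and execution are largely fixed,
#    so the ratio *rises* as quality improves, even though total cost falls.
def cost_per_defect(fixed_test_cost: float, repair_cost: float, defects: int) -> float:
    return (fixed_test_cost + repair_cost * defects) / defects

print(cost_per_defect(10_000, 100, 500))  # poor quality:  120.0 per defect
print(cost_per_defect(10_000, 100, 50))   # good quality:  300.0 per defect
# At zero defects the ratio is undefined, so the metric cannot credit
# the best possible outcome at all.

# 2. Cost per LOC: the same functionality in a high-level language takes
#    fewer lines, so cost per LOC looks worse even when real costs drop.
def unit_costs(loc: int, cost: float, function_points: int):
    return cost / loc, cost / function_points

low_level = unit_costs(loc=10_000, cost=100_000, function_points=100)
high_level = unit_costs(loc=2_000, cost=40_000, function_points=100)
print(low_level)   # (10.0, 1000.0) -- cheaper per LOC, dearer per FP
print(high_level)  # (20.0, 400.0)  -- dearer per LOC, cheaper per FP
```

Measured per function point, the high-level project is clearly more economical; measured per LOC, it looks twice as expensive. That inversion is exactly the distortion described above.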

Professional benchmark organizations such as Namcook Analytics, Q/P Management Group, Davids' Consulting, and TI Metricas in Brazil that validate client historical data before logging it can achieve measurement accuracy of perhaps 98%. Contract projects that need accurate billable hours in order to get paid are often accurate to within 90% for development effort (but many omit unpaid overtime, and they never record user costs).

Function point metrics are the best choice for both economic and quality analyses of software projects. The new SNAP (software nonfunctional assessment process) metric measures nonfunctional requirements but is difficult to apply and also lacks empirical data.

Ordinary internal information system projects and web applications developed under a cost-center model, where costs are absorbed instead of being charged out, are the least accurate and are the ones that average only 37% complete. Agile projects are very weak in measurement accuracy, often below 50%. Self-reported benchmarks are also weak and are often less than 35% accurate in accumulating actual costs.

A distant analogy to this book on measurement problems is Control of Communicable Diseases in Man, published by the U.S. Public Health Service. It has concise descriptions of the symptoms and causes of more than 50 common communicable diseases, together with discussions of proven effective therapies.

Another medical book with useful guidance for those of us in software is Paul Starr's excellent The Social Transformation of American Medicine, which won a Pulitzer Prize in 1982. Some of its topics on improving medical records and medical education have much to offer on improving software records and software education.

So as not to have an entire book filled with problems, Appendix 2 is a more positive section that shows 25 quantitative goals that could be achieved between now and 2026 if the industry takes measurements seriously and also takes quality seriously.


Acknowledgments

Thanks to my wife, Eileen Jones, for making this book possible, and for her patience when I get involved in writing and disappear for several hours. Thanks also for her patience on holidays and vacations when I take my portable computer and write early in the morning.

Thanks to my neighbor and business partner Ted Maroney, who handles contracts and the business side of Namcook Analytics LLC, which frees up my time for books and technical work. Thanks also to Aruna Sankaranarayanan for her excellent work with our Software Risk Master (SRM) estimation tool and our website. Thanks also to Larry Zevon for the fine work on our blog and to Bob Heffner for marketing plans. Thanks also to Gary Gack and Jitendra Subramanyam for their work with us at Namcook.

Thanks to other metrics and measurement research colleagues who also attempt to bring order into the chaos of software development. Special thanks to the late Allan Albrecht, the inventor of function points, for his invaluable contribution to the industry and for his outstanding work. Without Allan's pioneering work on function points, the ability to create accurate baselines and benchmarks would probably not exist today in 2016.

The new SNAP team from the International Function Point Users Group (IFPUG) also deserves thanks: Talmon Ben-Canaan, Carol Dekkers, and Daniel French.

Thanks also to Dr. Alain Abran, Mauricio Aguiar, Dr. Victor Basili, Dr. Barry Boehm, Dr. Fred Brooks, Manfred Bundschuh, Tom DeMarco, Dr. Reiner Dumke, Christof Ebert, Gary Gack, Tom Gilb, Scott Goldfarb, Peter Hill, Dr. Steven Kan, Dr. Leon Kappelman, Dr. Tom McCabe, Dr. Howard Rubin, Dr. Akira Sakakibara, Manfred Seufort, Paul Strassman, Dr. Gerald Weinberg, Cornelius Wille, the late Ed Yourdon, and the late Dr. Harlan Mills for their own solid research and for the excellence and clarity with which they communicated ideas about software. The software industry is fortunate to have researchers and authors such as these.

Thanks also to the other pioneers of parametric estimation for software projects: Dr. Barry Boehm of COCOMO, Tony DeMarco and Arlene Minkiewicz of PRICE, Frank Freiman and Dan Galorath of SEER, Dr. Larry Putnam of SLIM and the other Putnam family members, Dr. Howard Rubin of Estimacs, Dr. Charles Turk (a colleague at IBM when we built DPS in 1973), and William Roetzheim of ExcelerPlan. Many of us started work on parametric estimation in the 1970s and brought out our commercial tools in the 1980s.

Thanks to my former colleagues at Software Productivity Research (SPR) for their hard work on our three commercial estimating tools (SPQR/20 in 1984, CHECKPOINT in 1987, and KnowledgePlan in 1990): Doug Brindley, Chas Douglis, Lynn Caramanica, Carol Chiungos, Jane Greene, Rich Ward, Wayne Hadlock, Debbie Chapman, Mike Cunnane, David Herron, Ed Begley, Chuck Berlin, Barbara Bloom, Julie Bonaiuto, William Bowen, Michael Bragen, Kristin Brooks, Tom Cagley, Sudip Charkraboty, Craig Chamberlin, Charlie Duczakowski, Gail Flaherty, Richard Gazoorian, James Glorie, Scott Goldfarb, David Gustafson, Bill Harmon, Shane Hartman, Bob Haven, Steve Hone, Jan Huffman, Peter Katsoulas, Richard Kauffold, Scott Moody, John Mulcahy, Phyllis Nissen, Jacob Okyne, Donna O'Donnel, Mark Pinis, Tom Riesmeyer, Janet Russac, Cres Smith, John Smith, Judy Sommers, Bill Walsh, and John Zimmerman. Thanks also to Ajit Maira and Dick Spann for their service on SPR's board of directors.

Appreciation is also due to various corporate executives who supported the technical side of measurement and metrics by providing time and funding. From IBM, the late Ted Climis and the late Jim Frame both supported the author's measurement work and in fact commissioned several studies of productivity and quality inside IBM, as well as funding IBM's first parametric estimation tool in 1973. Rand Araskog and Dr. Charles Herzfeld at ITT also provided funds for metrics studies, as did Jim Frame, who became the first ITT VP of software.

Thanks are also due to the officers and employees of IFPUG. This organization started almost 30 years ago in 1986 and has grown to become the largest software measurement association in the history of software. When the affiliates in other countries are included, the community of function point users is the largest measurement association in the world.

There are other function point associations, such as the Common Software Measurement International Consortium, the Finnish Software Metrics Association, and the Netherlands Software Metrics Association, but all 16 of my software books have used IFPUG function points. This is partly because Al Albrecht and I worked together at IBM and later at Software Productivity Research.


About the Author

Capers Jones is currently the vice president and chief technology officer of Namcook Analytics LLC (www.Namcook.com), which designs leading-edge risk, cost, and quality estimation and measurement tools. Software Risk Master (SRM)™ is the company's advanced estimation tool, with a patent-pending early sizing feature that allows sizing before requirements via pattern matching. Namcook Analytics also collects software benchmark data and engages in longer-range software process improvement, quality, and risk-assessment studies. These studies are global and involve major corporations and some government agencies in many countries in Europe, Asia, and South America. Capers Jones is the author of 15 software books and several hundred journal articles. He is also an invited keynote speaker at many software conferences in the United States, Europe, and the Pacific Rim.
