NIST Special Publication 500-235

Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric

Arthur H. Watson
Thomas J. McCabe

Prepared under NIST Contract 43NANB517266
Dolores R. Wallace, Editor

Computer Systems Laboratory
National Institute of Standards and Technology
Gaithersburg, MD 20899-0001

September 1996

Founded by Thomas McCabe, McCabe Software has provided Software Quality Management and Software Change & Configuration Management solutions worldwide for over 30 years. “McCabe IQ” is used to analyze and visualize the security, quality, and testing of mission, life, and business critical applications, utilizing a comprehensive set of advanced software metrics including the McCabe-authored Cyclomatic Complexity metric.

www.mccabe.com
800.638.6316

Reports on Computer Systems Technology

The National Institute of Standards and Technology (NIST) has a unique responsibility for computer systems technology within the Federal government. NIST’s Computer Systems Laboratory (CSL) develops standards and guidelines, provides technical assistance, and conducts research for computers and related telecommunications systems to achieve more effective utilization of Federal information technology resources. CSL’s responsibilities include development of technical, management, physical, and administrative standards and guidelines for the cost-effective security and privacy of sensitive unclassified information processed in federal computers. CSL assists agencies in developing security plans and in improving computer security awareness training. This Special Publication 500 series reports CSL research and guidelines to Federal agencies as well as to organizations in industry, government, and academia.

National Institute of Standards and Technology Special Publication 500-235
Natl. Inst. Stand. Technol. Spec. Publ. 500-235, 123 pages (September 1996)

Abstract

The purpose of this document is to describe the structured testing methodology for software testing, also known as basis path testing. Based on the cyclomatic complexity measure of McCabe, structured testing uses the control flow structure of software to establish path coverage criteria. The resultant test sets provide more thorough testing than statement and branch coverage. Extensions of the fundamental structured testing techniques for integration testing and object-oriented systems are also presented. Several related software complexity metrics are described. Summaries of technical papers, case studies, and empirical results are presented in the appendices.

Keywords

Basis path testing, cyclomatic complexity, McCabe, object oriented, software development, software diagnostic, software metrics, software testing, structured testing

Acknowledgments

The authors acknowledge the contributions by Patricia McQuaid to Appendix A of this report.

Disclaimer

Certain trade names and company names are mentioned in the text or identified. In no case does such identification imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the products are necessarily the best available for the purpose.


Executive Summary

This document describes the structured testing methodology for software testing and related software complexity analysis techniques. The key requirement of structured testing is that all decision outcomes must be exercised independently during testing. The number of tests required for a software module is equal to the cyclomatic complexity of that module. The original structured testing document [NBS99] discusses cyclomatic complexity and the basic testing technique. This document gives an expanded and updated presentation of those topics, describes several new complexity measures and testing strategies, and presents the experience gained through the practical application of these techniques.

The software complexity measures described in this document are: cyclomatic complexity, module design complexity, integration complexity, object integration complexity, actual complexity, realizable complexity, essential complexity, and data complexity. The testing techniques are described for module testing, integration testing, and object-oriented testing.

A significant amount of practical advice is given concerning the application of these techniques. The use of complexity measurement to manage software reliability and maintainability is discussed, along with strategies to control complexity during maintenance. Methods to apply the testing techniques are also covered. Both manual techniques and the use of automated support are described.

Many detailed examples of the techniques are given, as well as summaries of technical papers and case studies. Experimental results are given showing that structured testing is superior to statement and branch coverage testing for detecting errors. The bibliography lists over 50 references to related information.
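To make the key requirement concrete: for a module whose decisions are all simple binary predicates, cyclomatic complexity is the number of predicates plus one, so the number of required tests can be read directly from the code. The following C function is a minimal invented sketch (the function name and test values are assumptions for illustration, not taken from the report):

    #include <stdio.h>

    /* Invented example: classify a temperature reading.
     * Two binary decision predicates, so v(G) = 2 + 1 = 3;
     * structured testing therefore requires three tests,
     * one for each path in a basis set of independent paths. */
    const char *classify(int celsius)
    {
        if (celsius < 0)              /* predicate 1 */
            return "freezing";
        if (celsius > 30)             /* predicate 2 */
            return "hot";
        return "moderate";
    }

    int main(void)
    {
        printf("%s\n", classify(-5)); /* basis path 1: predicate 1 true  */
        printf("%s\n", classify(35)); /* basis path 2: predicate 2 true  */
        printf("%s\n", classify(20)); /* basis path 3: both false        */
        return 0;
    }

Each of the three tests exercises one member of a basis set of v(G) = 3 linearly independent paths through the function’s control flow graph.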


CONTENTS

Abstract
Keywords
Acknowledgments
Executive Summary

1 Introduction
  1.1 Software testing
  1.2 Software complexity measurement
  1.3 Relationship between complexity and testing
  1.4 Document overview and audience descriptions

2 Cyclomatic Complexity
  2.1 Control flow graphs
  2.2 Definition of cyclomatic complexity, v(G)
  2.3 Characterization of v(G) using a basis set of control flow paths
  2.4 Example of v(G) and basis paths
  2.5 Limiting cyclomatic complexity to 10

3 Examples of Cyclomatic Complexity
  3.1 Independence of complexity and size
  3.2 Several flow graphs and their complexity

4 Simplified Complexity Calculation
  4.1 Counting predicates
  4.2 Counting flow graph regions
  4.3 Use of automated tools

5 Structured Testing
  5.1 The structured testing criterion
  5.2 Intuition behind structured testing
  5.3 Complexity and reliability
  5.4 Structured testing example
  5.5 Weak structured testing
  5.6 Advantages of automation
  5.7 Critical software

6 The Baseline Method
  6.1 Generating a basis set of paths
  6.2 The simplified baseline method
  6.3 The baseline method in practice
  6.4 Example of the baseline method
  6.5 Completing testing with the baseline method

7 Integration Testing
  7.1 Integration strategies
  7.2 Combining module testing and integration testing
  7.3 Generalization of module testing criteria
  7.4 Module design complexity
  7.5 Integration complexity
  7.6 Incremental integration

8 Testing Object-Oriented Programs
  8.1 Benefits and dangers of abstraction
  8.2 Object-oriented module testing
  8.3 Integration testing of object-oriented programs
  8.4 Avoiding unnecessary testing

9 Complexity Reduction
  9.1 Actual complexity and realizable complexity
  9.2 Removing control dependencies
  9.3 Trade-offs when reducing complexity

10 Essential Complexity
  10.1 Structured programming and maintainability
  10.2 Definition of essential complexity, ev(G)
  10.3 Examples of essential complexity

11 Maintenance
  11.1 Effects of changes on complexity
    11.1.1 Effect of changes on cyclomatic complexity
    11.1.2 Effect of changes on essential complexity
    11.1.3 Incremental reengineering
  11.2 Retesting at the path level
  11.3 Data complexity
  11.4 Reuse of testing information

12 Summary by Lifecycle Process
  12.1 Design process
  12.2 Coding process
  12.3 Unit testing process
  12.4 Integration testing process
  12.5 Maintenance process

13 References

Appendix A Related Case Studies
  A.1 Myers
  A.2 Schneidewind and Hoffman
  A.3 Walsh
  A.4 Henry, Kafura, and Harris
  A.5 Sheppard and Kruesi
  A.6 Carver
  A.7 Kafura and Reddy
  A.8 Gibson and Senn
  A.9 Ward
  A.10 Caldiera and Basili
  A.11 Gill and Kemerer
  A.12 Schneidewind
  A.13 Case study at Stratus Computer
  A.14 Case study at Digital Equipment Corporation
  A.15 Case study at U.S. Army Missile Command
  A.16 Coleman et al
  A.17 Case study at the AT&T Advanced Software Construction Center
  A.18 Case study at Sterling Software
  A.19 Case study at GTE Government Systems
  A.20 Case study at Elsag Bailey Process Automation
  A.21 Koshgoftaar et al

Appendix B Extended Example
  B.1 Testing the blackjack program
  B.2 Experimental comparison of structured testing and branch coverage

1 Introduction

1.1 Software testing

This document describes the structured testing methodology for software testing. Software testing is the process of executing software and comparing the observed behavior to the desired behavior. The major goal of software testing is to discover errors in the software [MYERS2], with a secondary goal of building confidence in the proper operation of the software when testing does not discover errors. The conflict between these two goals is apparent when considering a testing process that did not detect any errors. In the absence of other information, this could mean either that the software is high quality or that the testing process is low quality. There are many approaches to software testing that attempt to control the quality of the testing process to yield useful information about the quality of the software being tested.

Although most testing research is concentrated on finding effective testing techniques, it is also important to make software that can be effectively tested. It is suggested in [VOAS] that software is testable if faults are likely to cause failure, since then those faults are most likely to be detected by failure during testing. Several programming techniques are suggested to raise testability, such as minimizing variable reuse and maximizing output parameters. In [BERTOLINO] it is noted that although having faults cause failure is good during testing, it is bad after delivery. For a more intuitive testability property, it is best to maximize the probability of faults being detected during testing while minimizing the probability of faults causing failure after delivery. Several programming techniques are suggested to raise testability, including assertions that observe the internal state of the software during testing but do not affect the specified output, and multiple version development [BRILLIANT] in which any disagreement between versions can be reported during testing but a majority voting mechanism helps reduce the likelihood of incorrect output after delivery. Since both of those techniques are frequently used to help construct reliable systems in practice, this version of testability may capture a significant factor in software development.

For large systems, many errors are often found at the beginning of the testing process, with the observed error rate decreasing as errors are fixed in the software. When the observed error rate during testing approaches zero, statistical techniques are often used to determine a reasonable point to stop testing [MUSA]. This approach has two significant weaknesses. First, the testing effort cannot be predicted in advance, since it is a function of the intermediate results of the testing effort itself. A related problem is that the testing schedule can expire long before the error rate drops to an acceptable level. Second, and perhaps more importantly, the statistical model only predicts the estimated error rate for the underlying test case distribution being used during the testing process. It may have little or no connection to the likelihood of errors manifesting once the system is delivered or to the total number of errors present in the software.

Another common approach to testing is based on requirements analysis. A requirements specification is converted into test cases, which are then executed so that testing verifies system behavior for at least one test case within the scope of each requirement. Although this approach is an important part of a comprehensive testing effort, it is certainly not a complete solution. Even setting aside the fact that requirements documents are notoriously error-prone, requirements are written at a much higher level of abstraction than code. This means that there is much more detail in the code than the requirement, so a test case developed from a requirement tends to exercise only a small fraction of the software that implements that requirement. Testing only at the requirements level may miss many sources of error in the software itself.

The structured testing methodology falls into another category, the white box (or code-based, or glass box) testing approach. In white box testing, the software implementation itself is used to guide testing. A common white box testing criterion is to execute every executable statement during testing, and verify that the output is correct for all tests.
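To see why statement-level criteria can be weak, consider a minimal C sketch, an invented illustration in the spirit of the intuition developed later in Section 5.2, not code from this report. Two tests, one with a = 1, b = 1 and one with a = 0, b = 0, achieve complete statement and branch coverage, yet neither triggers the division by zero that occurs when a == 0 and b != 0:

    /* Invented fragment. Tests (a = 1, b = 1) and (a = 0, b = 0)
     * execute every statement and both outcomes of each decision,
     * but never the combination a == 0, b != 0 that divides by zero. */
    int scale(int a, int b)
    {
        int x = 0;
        if (a != 0)          /* decision 1 */
            x = a;
        if (b != 0)          /* decision 2 */
            return b / x;    /* fails when a == 0 and b != 0 */
        return x;
    }

The cyclomatic complexity of scale is 3, so the structured testing criterion requires a third independent path beyond the two tests that suffice for branch coverage, making it much more likely that the faulty interaction between the two decisions is exercised.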
