Automatic Identification Of Bug-Introducing Changes


Sunghun Kim (1), Thomas Zimmermann (2), Kai Pan (1), E. James Whitehead, Jr. (1)

(1) University of California, Santa Cruz, CA, USA. {hunkim, pankai, ejw}@cs.ucsc.edu
(2) Saarland University, Saarbrücken, Germany. tz@acm.org

21st IEEE International Conference on Automated Software Engineering (ASE'06)

Abstract

Bug-fixes are widely used for predicting bugs or finding risky parts of software. However, a bug-fix does not contain information about the change that initially introduced a bug. Such bug-introducing changes can help identify important properties of software bugs such as correlated factors or causalities. For example, they reveal which developers or what kinds of source code changes introduce more bugs. In contrast to bug-fixes, which are relatively easy to obtain, the extraction of bug-introducing changes is challenging.

In this paper, we present algorithms to automatically and accurately identify bug-introducing changes. We remove false positives and false negatives by using annotation graphs, by ignoring non-semantic source code changes, and by ignoring outlier fixes. Additionally, we validated that the fixes we used are true fixes by a manual inspection. Altogether, our algorithms remove about 38%-51% of false positives and 14%-15% of false negatives compared to the previous algorithm. Finally, we show applications of bug-introducing changes that demonstrate their value for research.

1. Introduction

Today, software bugs remain a constant and costly fixture of industrial and open source software development. To manage the flow of bugs, software projects carefully control their changes using software configuration management (SCM) systems, capture bug reports using bug tracking software (such as Bugzilla), and then record which change in the SCM system fixes a specific bug in the change tracking system.

The progression of a single bug is as follows. A programmer makes a change to a software system, either to add new functionality, restructure the code, or to repair an existing bug. In the process of making this change, they inadvertently introduce a bug into the software. We call this a bug-introducing change: the modification in which a bug was injected into the software. At some later time, this bug manifests itself in some undesired external behavior, which is recorded in a bug tracking system. Subsequently, a developer modifies the project's source code, possibly changing multiple files, and repairs the bug. They commit this change to the SCM system, permanently recording it. As part of the commit, developers commonly (but not always) record in the SCM change log the identifier of the bug report that was just fixed. We call this modification a bug-fix change.

Software evolution research leverages the history of changes and bug reports that accretes over time in SCM systems and bug tracking systems to improve our understanding of how a project has grown. It offers the possibility that by examining the history of changes made to a software project, we might better understand patterns of bug introduction, and raise developer awareness that they are working on risky (that is, bug-prone) sections of a project.
For example, if we can find rules that associate bug-introducing changes with certain source code change patterns (such as signature changes that involve parameter addition [11]), it may be possible to identify source code change patterns that are bug-prone.

Due to the widespread use of bug tracking and SCM systems, the most readily available data concerning bugs are the bug-fix changes. It is easy to mine an SCM repository to find those changes that have repaired a bug. To do so, one examines change log messages in two ways: searching for keywords such as "Fixed" or "Bug" [12], and searching for references to bug reports like "#42233" [2, 4, 16]. With bug-fix information, researchers can determine the location of a bug. This permits useful analysis, such as determining per-file bug counts, predicting bugs, finding risky parts of software [7, 13, 14], or visually revealing the relationship between bugs and software evolution [3]; a minimal sketch of this kind of log mining appears at the end of this section.

The major problem with bug-fix data is that it sheds no light on when a bug was injected into the code and who injected it. The person fixing a bug is often not the person who first introduced it, and the bug-fix must, by definition, occur after the bug was first injected. Bug-fix data also provides imprecise data on where a bug occurred. Since functions and methods change their names over time, the fact that a fix was made to function "foo" does not mean the function still had that name when the bug was injected; it could have been named "bar" then. In order to deeply understand the phenomena surrounding the introduction of bugs into code, such as correlated factors and causalities, we need access to the actual moment and point at which the bug was introduced. This is tricky, and it is the focus of our paper.
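For illustration, here is a minimal Java sketch of such change-log mining. The keyword list and bug-identifier pattern are assumptions chosen for the example, not the exact expressions used in [12] or [16]:

    import java.util.regex.Pattern;

    // Sketch of bug-fix change detection by mining commit log messages.
    public class FixLogMatcher {
        // Keywords such as "fixed" or "bug", matched case-insensitively.
        private static final Pattern KEYWORDS = Pattern.compile(
                "\\b(fix(e[sd])?|bugs?|defects?|patch)\\b", Pattern.CASE_INSENSITIVE);
        // References to bug reports, e.g. "#42233" or "bug 42233".
        private static final Pattern BUG_ID = Pattern.compile(
                "#\\s*(\\d{3,})|\\bbug\\s+(\\d{3,})", Pattern.CASE_INSENSITIVE);

        // Returns true if a commit log message looks like a bug-fix.
        public static boolean isBugFix(String logMessage) {
            return KEYWORDS.matcher(logMessage).find()
                    || BUG_ID.matcher(logMessage).find();
        }

        public static void main(String[] args) {
            System.out.println(isBugFix("Fixed #42233: NPE in report printer")); // true
            System.out.println(isBugFix("Refactor toolbar layout"));             // false
        }
    }

Real projects tune such patterns per repository; overly broad keywords inflate the set of presumed fixes, which is exactly the problem the manual verification in Section 4.5 measures.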

Revision 1 (by kim, bug-introducing); annotate: every line last modified in revision 1 by kim:

    1: public void bar() {
    2:   // print report
    3:   if (report == null) {
    4:     println(report);
    5:
    6: }

Revision 2 (by ejw); annotate: lines 1, 3, and 4 last modified in revision 2 by ejw, all other lines in revision 1 by kim:

    1: public void foo() {
    2:   // print report
    3:   if (report == null)
    4:   {
    5:     println(report);
    6:
    7: }

Revision 3 (by kai, bug-fix); the delta against revision 2 touches lines 2, 3, and 6:

    1: public void foo() {
    2:   // print out report
    3:   if (report != null)
    4:   {
    5:     println(report);
    6: }

Figure 1. Example bug-fix and source code changes. A null-value checking bug is injected in revision 1 and fixed in revision 3.

2. Background

Previous work by the second author developed what was, prior to the current paper, the only approach for identifying bug-introducing changes from bug-fix changes [16]. For convenience, we call this previous approach the SZZ algorithm, after the first letters of the authors' last names. To identify bug-introducing changes, SZZ first finds bug-fix changes by locating bug identifiers or relevant keywords in change log text, or by following an explicitly recorded linkage between a bug tracking system and a specific SCM commit. SZZ then runs a diff tool to determine what changed in the bug-fixes. The diff tool returns a list of regions that differ in the two files; each region is called a hunk. SZZ examines each hunk in the bug-fix and assumes that the deleted or modified source code in each hunk is the location of a bug. Finally, SZZ tracks down the origins of the deleted or modified source code in the hunks using the built-in annotate feature of SCM systems. The annotate feature computes, for each line in the source code, the most recent revision in which the line was changed, and the developer who made the change. The discovered origins are identified as bug-introducing changes.

Figure 1 shows an example of the history of development of a single function over three revisions.

Revision 1 shows the initial creation of function bar, and the injection of a bug into the software: the line 'if (report == null) {' should instead use '!='. The leftmost column of each revision shows the output of the SCM annotate command, identifying the most recent revision in which each line changed and the developer who made that change. Since this is the first revision, all lines were first modified at revision 1 by the initial developer 'kim.' The second column of numbers in revision 1 lists line numbers within that revision.

In the second revision, two changes were made. The function bar was renamed to foo, and a cosmetic change was made where the angle bracket at the end of line 3 in revision 1 was moved down to its own line (4) in revision 2. As a result, the annotate output shows lines 1, 3, and 4 as having been most recently modified in revision 2 by 'ejw.'

Revision 3 shows three changes: a modification to the comment in line 2, deletion of the blank line after the println, and the actual bug-fix, changing line 3 from '==' to '!='.

Let us consider what happens when the SZZ algorithm tries to identify the bug-introducing change associated with the bug-fix in revision 3. SZZ starts by computing the delta between revisions 3 and 2, yielding lines 2, 3, and 6. SZZ then uses SCM annotate data to determine the initial origin of these three lines. The first problem we encounter is that SZZ seeks the origin of the comment line (2) and the blank line (6); clearly neither contains the injected bug, since these lines are not executable. The next problem comes when SZZ tries to find the origin of line 3.
Since revision 2 modified this line to make a cosmetic change (moving the angle bracket), the SCM annotate data indicates that this line was most recently modified at revision 2. SZZ stops there, claiming that revision 2 is the bug-introducing change. This is incorrect, since revision 1 was the point at which the bug was initially entered into the code. The cosmetic change threw off the algorithm.

A final problem is that, using just SCM annotate information, it is impossible to determine that the function containing the bug changed its name from bar to foo. The annotate information only contains triples of (current revision line number, most recent modification revision, developer who made the modification). There is no information here that states that a given line in one revision maps to a specific line in a previous (or following) revision. It is certainly possible to compute this information, and indeed we do so in the approach we outline in this paper, but doing so requires more information than is provided solely by SCM annotate capabilities.

We can now summarize the two main limitations of the SZZ algorithm:

SCM annotation information is insufficient. There is not enough information to identify bug-introducing changes. The previous example demonstrates how a simple formatting change (moving the bracket) modifies SCM annotate data so that an incorrect bug-introducing revision is chosen. It also highlights the need to trace the evolution of individual lines across revisions, so that function/method containment can be determined.

Not all modifications are fixes. Even if a file change is labeled a bug-fix by developers, not all hunks in the change are bug-fixes. As we saw above, changes to comments, blank lines, and formatting are not bug-fixes, yet are flagged as such.
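To make this baseline concrete before describing our extensions, the following minimal Java sketch restates the SZZ core loop as described above. The Hunk, Annotation, and Repository types are hypothetical stand-ins for real SCM access code, and the sketch assumes annotation is taken against the revision immediately preceding the fix:

    import java.util.*;

    // Sketch of the SZZ core loop: diff the bug-fix, then annotate the
    // deleted or modified lines of each hunk to find their last modifiers.
    public class Szz {
        record Hunk(String file, List<Integer> deletedOrModifiedLines) {}
        record Annotation(int revision, String developer) {}

        interface Repository {
            List<Hunk> diff(String file, int fixRevision);            // hunks of the bug-fix
            Annotation annotate(String file, int revision, int line); // "cvs annotate"-style data
        }

        // For each line deleted or modified by a bug-fix, report the revision
        // that last touched it; SZZ flags these revisions as bug-introducing.
        static Set<Annotation> bugIntroducing(Repository repo, String file, int fixRevision) {
            Set<Annotation> origins = new HashSet<>();
            for (Hunk hunk : repo.diff(file, fixRevision)) {
                for (int line : hunk.deletedOrModifiedLines()) {
                    // Annotate against the revision *before* the fix.
                    origins.add(repo.annotate(file, fixRevision - 1, line));
                }
            }
            return origins;
        }
    }

In the Figure 1 example, this loop would annotate lines 2, 3, and 6 of revision 2 and therefore blame revision 2, illustrating both limitations at once.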

These two limitations cause the SZZ algorithm to inaccurately identify bug-introducing changes. To address these issues, in this paper we present an improved approach for accurate bug-introducing change identification that extends SZZ. In the new approach, we employ annotation graphs, which contain information on the cross-revision mappings of individual lines. This is an improvement over SCM annotate data, and permits a bug to be associated with its containing function or method. We additionally remove false bug-fixes caused by comments, blank lines, and format changes.

An important aspect of this new approach is that it is automated. Since revision histories for large projects can contain thousands of revisions and thousands of files, automated approaches are the only ones that scale to this size. As an automated approach, the bug-introducing identification algorithm we describe can be employed in a wide range of software evolution analyses as an initial clean-up step to obtain high quality data sets for further analysis of the causes and patterns of bug formation.

To determine the accuracy of the automatic approach, we use a manual approach as well. Two human judges manually verified all hunks in a series of bug-fix changes to ensure that the corresponding hunks are real bug-fixes.

We applied our automatic and manual approaches to identify bug-introducing changes at the method level for two Java open source projects, Columba and Eclipse (jdt.core). We propose the following steps, as shown in Figure 2, to remove false positives and false negatives in identifying bug-introducing changes.

1. Use annotation graphs to provide more detailed annotation information
2. Ignore comment and blank line changes
3. Ignore format changes
4. Ignore outlier bug-fix revisions in which too many files were changed
5. Manually verify all hunks in the bug-fix changes

Figure 2. Summary of approach

In overview, applying this new approach (steps 1-5) removes 38%-51% of false positives and 14%-15% of false negatives as compared to the original SZZ algorithm. Using only the automated algorithms (steps 1-4), we can remove 36%-48% of false positives and 14% of false negatives. The manual fix verification does not scale, but it highlights the low residual error remaining at the end of the automated steps, since it removes only 2%-3% of false positives and 1% of false negatives.

In the remainder of the paper, we begin by describing our experimental setup (Section 3). Following are results from our experiments (Section 4), along with discussion of the results (Section 5). Rounding off the paper, we end with some existing applications of bug-introducing changes (Section 6) and conclusions (Section 7).

3. Experimental Setup

In this section, we describe how we extract the change history from an SCM system for our two projects of interest. We also explain the accuracy measures we use for assessing the performance of each stage in our improved algorithm for identifying bug-introducing changes.

3.1. History Extraction

Kenyon is a system that extracts source code change histories from SCM systems such as CVS and Subversion [1]. Kenyon automatically checks out the source code for each revision and extracts change information such as the change log, author, change date, source code, change delta, and change metadata. We used Kenyon to extract the histories of two open source projects, as shown in Table 1.
3.2. Accuracy Measures

A bug-introducing change set is all of the changes within a specific range of project revisions that have been identified as bug-introducing. Suppose we identify a bug-introducing change set, P, using a bug-introducing identification algorithm such as SZZ [16]. We then apply the algorithm described in this paper, and derive another bug-introducing change set, R, as shown in Figure 3. The common elements of the two sets are P ∩ R.

Figure 3. Bug-introducing change sets identified using SZZ (P) and with the new algorithm (R).

Assuming R is the more accurate bug-introducing change set, we compute false positives and false negatives for the set P as follows:

    False positive rate (FP) = |P - (P ∩ R)| / |P|
    False negative rate (FN) = |R - (P ∩ R)| / |R|
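As a concrete restatement of these measures, the following Java sketch computes both rates over arbitrary change sets; the example sets in main are invented for illustration:

    import java.util.HashSet;
    import java.util.Set;

    // Sketch of the accuracy measures of Section 3.2 over two change sets.
    public class AccuracyMeasures {
        // FP = |P - (P ∩ R)| / |P|: elements SZZ reports that the refined set rejects.
        static <T> double falsePositiveRate(Set<T> p, Set<T> r) {
            Set<T> onlyP = new HashSet<>(p);
            onlyP.removeAll(r);
            return (double) onlyP.size() / p.size();
        }

        // FN = |R - (P ∩ R)| / |R|: refined-set elements that SZZ misses.
        static <T> double falseNegativeRate(Set<T> p, Set<T> r) {
            Set<T> onlyR = new HashSet<>(r);
            onlyR.removeAll(p);
            return (double) onlyR.size() / r.size();
        }

        public static void main(String[] args) {
            Set<String> p = Set.of("c1", "c2", "c3", "c4");
            Set<String> r = Set.of("c2", "c3", "c4", "c5");
            System.out.println(falsePositiveRate(p, r)); // 0.25
            System.out.println(falseNegativeRate(p, r)); // 0.25
        }
    }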

4. Algorithms and Experiments

In this section, we explain our approach in detail and present our results from using the improved algorithm to identify bug-introducing changes.

Table 1. Analyzed projects. '# of revisions' is the number of revisions we analyzed; '# of fix revisions' is the number of revisions identified as bug-fix revisions; 'Average LOC' is the average lines of code of the project over the given period.

    Project             Software type   Period            # of revisions   # of fix revisions   % of fix revisions   Average LOC
    Columba             Email client    11/2002-06/2003   500              143                  29%                  48,135
    Eclipse (jdt.core)  IDE             06/2001-03/2002   1,000            158                  16%                  111,059

4.1. Using Annotation Graphs

The SZZ algorithm identifies bug-introducing changes for fine-grained entities such as functions or methods using SCM annotation data. In this section, we show that this information is insufficient, and may introduce false positives and false negatives.

Assume a bug-fix change occurs at revision 20, and involves the deletion of three lines (see Figure 4). Since they were deleted, the three lines are likely to contain a bug. In the SZZ approach, SCM annotate data is used to obtain the revisions in which these lines were initially added. The first two lines were added at revision 3, and the third line was added at revision 9. Thus, we identify the changes between revisions 2 and 3 and between revisions 8 and 9 as bug-introducing changes at the file level.

Figure 4. Finding bug-introducing changes at the function level.

A problem occurs when we try to locate bug-introducing changes for entities such as functions or methods. Suppose the deleted source code at revision 20 was part of the 'foo()' function (see Figure 4). Note that SCM annotation data for CVS or Subversion includes only revision and author information. This means we only know that the first two lines in Figure 4 were added at revision 3 by 'hunkim'; we do not know the actual line numbers of the deleted code at revision 3. In past research, it was assumed that the lines at revision 3 are part of the 'foo()' function, which is then marked as a bug-introducing change, even though there is no guarantee that the function 'foo()' existed at revision 3.

Suppose that at revision 3 'foo()' does not exist, while the 'bar()' function does exist, as shown in Figure 4.
One explanation for how this could occur is that the 'bar()' function changed its name to 'foo()' at some later revision. One consequence is that the above assumption is wrong: the 'foo()' function at revision 3 does not contain the bug-introducing change (a false positive). We also miss a real bug-introducing change, 'bar()' at revision 3 (a false negative). Since SCM annotations do not provide the line numbers for the annotated lines at revision 3, it is not possible to identify the function where the bug-introducing lines were inserted.

We address this problem by using annotation graphs [18], a representation for origin analysis [6, 10] at the line level, as shown in Figure 5. In an annotation graph, every line of a revision is represented as a node; edges connect lines (nodes) that evolved from each other, either by modification of the line itself or by movement of the line within the file. In Figure 5, two regions were changed between revisions r1 and r2: lines 10 to 12 were inserted and lines 19 to 23 were modified. The annotation graph captures these changes as follows: line 1 in r2 corresponds to line 1 in r1 and was not changed (the edge is not marked in bold); the same holds for lines 2 to 9. Lines 10 to 12 were inserted in r2, thus they have no origin in r1. Line 13 in r2 was unchanged but has a different line number (10) in r1, which is indicated by the edge (the same holds for lines 14 to 18 in r2). Lines 19 to 23 were modified in r2 and originate from lines 16 to 20 (edges marked in bold). Note that we approximate origins conservatively, i.e., for modifications we connect all lines affected in r1 (lines 16 to 20) with every line affected in r2 (lines 19 to 23).

Figure 5. An annotation graph shows line changes of a file across three revisions [18]. A single node represents each line in a revision; edges between nodes indicate that one line originates from another, either by modification or by movement.

The annotation graph improves the identification of bug-introducing code by providing, for each line in the bug-fix change, the line number in the bug-introducing revision. This is computed by performing a backward directed depth-first search. The resulting line number is then used to identify the correct function name in the bug-introducing revision. For the above example, the annotation graph would annotate the deleted lines with the line numbers in revision 3, which are then used to identify function 'bar'.
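As an illustration of the backward search, here is a minimal Java sketch of an annotation graph. The Node type and the graph wiring are illustrative assumptions; in practice the edges come from origin analysis [18]:

    import java.util.*;

    // Sketch of an annotation graph: one node per (revision, line), with
    // edges from each line to the line(s) it evolved from. A backward
    // search recovers the line numbers in the bug-introducing revision.
    public class AnnotationGraph {
        record Node(int revision, int line) {}

        // predecessors.get(n) = lines in earlier revisions that n evolved from
        private final Map<Node, List<Node>> predecessors = new HashMap<>();

        void addEdge(Node from, Node to) {
            predecessors.computeIfAbsent(to, k -> new ArrayList<>()).add(from);
        }

        // Walk backward until the line's first appearance (no predecessor).
        Set<Node> origins(Node start) {
            Set<Node> result = new HashSet<>();
            Deque<Node> stack = new ArrayDeque<>(List.of(start));
            Set<Node> seen = new HashSet<>();
            while (!stack.isEmpty()) {
                Node n = stack.pop();
                if (!seen.add(n)) continue;         // skip already-visited nodes
                List<Node> preds = predecessors.get(n);
                if (preds == null) {
                    result.add(n);                  // first revision containing this line
                } else {
                    stack.addAll(preds);            // keep following the line backward
                }
            }
            return result;
        }
    }

In the running example, the lines deleted by the fix at revision 20 are traced back to their line numbers in revision 3, which identify the enclosing function as 'bar' rather than 'foo'.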

To demonstrate the usefulness of annotation graphs for locating bug-introducing changes, we identified bug-introducing changes at the method level for our two projects, with and without the use of annotation graphs. The left circle in Figure 6 (a) shows the count of bug-introducing changes at the method level identified without using the annotation graph; the right circle shows the count when using the annotation graph. Without the annotation graph we have about 2% false positives and 1%-4% false negatives (3%-6% total errors) in identifying bug-introducing changes. Thus, annotation graphs provide information for more accurate bug-introducing change identification at the method level.

Figure 6. Bug-introducing change sets with and without annotation graphs.

4.2. Non-Behavior Changes

Software bugs involve incorrect behavior of the software [8], and hence are not located in the formatting of the source code or in comments. Changes to source code format or comments, or the addition and removal of blank lines, do not affect the software's behavior. For example, Figure 7 shows a change in which one blank line was deleted and an 'if' condition was added to fix a bug. If we just apply SZZ, we identify the blank line as a problematic line and search for the origin of the blank line. We then identify the revision and corresponding method of the blank line as a bug-introducing change, which is a false positive. To remove such false positives, we ignore blank line and comment changes in the bug-fix hunks (a small filtering sketch follows at the end of this subsection).

    public void notifySourceElementRequestor() {
    -
    +   if (reportReferenceInfo) {
    +     notifyAllUnknownReferences();
    +   }
        // collect the top level ast nodes
        int length = 0;

Figure 7. Blank line deletion example (in .../SourceElementParser.java).

Figure 8 shows the difference in identified bug-introducing change sets from ignoring comment and blank line changes. This approach removes 14%-20% of false positives.

Figure 8. Identified bug-introducing change sets by ignoring comment and blank line changes.
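The following minimal Java sketch illustrates one way such a filter might look. The single-line heuristic is an assumption: a real implementation would track block-comment state across lines and ignore comment markers inside string literals.

    // Sketch of the Section 4.2 filter: before searching for origins, drop
    // fix-hunk lines that are blank or look like pure comments.
    public class NonBehaviorFilter {
        static boolean isIgnorable(String line) {
            String t = line.trim();
            return t.isEmpty()               // blank line
                    || t.startsWith("//")    // line comment
                    || t.startsWith("/*")    // start of a block comment
                    || t.startsWith("*");    // continuation of a block comment
        }

        public static void main(String[] args) {
            System.out.println(isIgnorable("   "));              // true: blank
            System.out.println(isIgnorable("// print report"));  // true: comment
            System.out.println(isIgnorable("println(report);")); // false: code
        }
    }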
4.3. Format Changes

Similar to comment and blank line changes, source code format changes do not affect software behavior. So if the source code's format was changed during a bug-fix, as shown in Figure 9, the format change should be ignored when we identify bug-introducing changes.

    - if ( folder == null ) return;
    + if (folder == null) return;

Figure 9. Format change example (in ...rToolbar.java).

Unlike comment and blank line changes, format changes affect the SCM annotation information. For example, consider the changes to the 'foo' function shown in Figure 10. Revision 10 is a bug-fix change, involving repair to a faulty 'if'. To identify the corresponding bug-introducing change, we need to find the origin of the 'if' at revision 10. Revision 5 involves only a formatting change to the code. If we do not ignore source code format changes, then when we examine the SCM annotation information we identify 'foo' at revision 5 as a bug-introducing change (a false positive). In fact, the problematic line was originally created at revision 3 (this is missed, hence a false negative). Due to inaccurate annotation information, source code format changes lead to significant numbers of false positives and false negatives. Ignoring format changes is therefore an important step in the accurate identification of bug-introducing changes; a normalization sketch appears at the end of this subsection.

    Revision 3:             if ( a == true ) return;
    Revision 5:             if (a == true) return;
    Revision 10 (bug-fix):  if (a == false) return;

Figure 10. False positive and false negative example caused by format changes.

Figure 11 compares the results of the SZZ approach with the improved approach that ignores format changes in bug-fix hunks. Overall, ignoring source code format changes removes 18%-25% of false positives and 13%-14% of false negatives.

Figure 11. Bug-introducing change sets identified by ignoring source code format changes.
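One plausible way to recognize format-only hunks is sketched below, under the assumption that collapsing all whitespace is an acceptable normalization. It is deliberately crude: it would also equate 'int x' with 'intx', so a tokenizing comparison would be safer in practice.

    import java.util.List;
    import java.util.stream.Collectors;

    // Sketch of the Section 4.3 idea: treat a hunk as a pure format change
    // when its old and new sides are identical after whitespace is removed.
    public class FormatChangeFilter {
        static String normalize(List<String> lines) {
            return lines.stream()
                    .map(l -> l.replaceAll("\\s+", ""))  // drop all whitespace
                    .collect(Collectors.joining());
        }

        // True if the hunk only reformats code, e.g. Figure 9's change.
        static boolean isFormatOnly(List<String> oldLines, List<String> newLines) {
            return normalize(oldLines).equals(normalize(newLines));
        }

        public static void main(String[] args) {
            System.out.println(isFormatOnly(
                    List.of("if ( folder == null ) return;"),
                    List.of("if (folder == null) return;"))); // true
        }
    }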

4.4. Remove Fix Revision Outliers

It is questionable whether all the file changes in a bug-fix revision are bug-fixes, especially if a bug-fix revision contains a large number of file changes. It seems very improbable that in a bug-fix change containing hundreds of file changes, every one has some bearing on the fixed bug. We observed the number of files changed in each bug-fix revision for our two projects, as shown in Figure 12. Most bug-fix revisions contain changes to just one or two files; the middle 50% of file-change counts per revision (between the 25% and 75% quartiles) is about 1-3. A typical approach for removing outliers is to assume a data item is an outlier if it is more than 1.5 times greater than the 50% quartile. In our experiment, we adopt a very conservative approach and define as outliers only file-change counts greater than 5 times the 50% quartile. This ensures that any changes we mark as outliers truly have a large number of file changes. Changes identified as outliers for our two projects are marked in Figure 12. A quartile-based sketch of this filter follows at the end of this subsection.

Figure 12. Box plots of the number of file changes per revision.

To ensure we were not incorrectly labeling these changes as outliers, we manually inspected each file change in the outlier revisions. We observed that most of the changes are method name and parameter name changes. For example, one parameter type changed from 'TypeDeclaration' to 'LocalTypeDeclaration', and hence the revision contains 7 file changes related to this change, as shown in Figure 13.

    - public boolean visit(TypeDeclaration typeDeclaration, BlockScope scope) {
    + public boolean visit(LocalTypeDeclaration typeDeclaration, BlockScope scope) {

Figure 13. Object type change example (in .../matching/MatchSet.java).

As shown in Figure 14, ignoring outlier revisions removes 7%-16% of false positives. Even though most changes in the outlier revisions contain method name changes or parameter changes, it is possible that these changes are real bug-fixes. A determination of whether they are truly ignorable outliers will depend on the individual project. As a result, ignoring outlier revisions is an optional aspect of our approach for identifying bug-introducing changes.

Figure 14. Bug-introducing change sets identified by ignoring outlier revisions.
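A minimal Java sketch of this cutoff follows, reading the paper's "50% quartile" as the median; the sample counts in main are invented for illustration:

    import java.util.Arrays;

    // Sketch of the Section 4.4 outlier filter: flag a bug-fix revision as
    // an outlier when it changes more than 5 times the median number of files.
    public class FixOutlierFilter {
        static double median(int[] sorted) {
            int n = sorted.length;
            return n % 2 == 1 ? sorted[n / 2]
                              : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
        }

        // True for revisions whose file-change count exceeds 5x the median.
        static boolean isOutlier(int filesChanged, int[] allCounts) {
            int[] sorted = allCounts.clone();
            Arrays.sort(sorted);
            return filesChanged > 5 * median(sorted);
        }

        public static void main(String[] args) {
            int[] counts = {1, 1, 2, 2, 3, 1, 2, 40};   // files changed per fix revision
            System.out.println(isOutlier(40, counts));  // true: far above 5x median
            System.out.println(isOutlier(3, counts));   // false
        }
    }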
4.5. Manual Fix Hunk Verification

We identify bug-fix revisions by mining change logs, and the bug-fix revision data is used to identify bug-introducing changes. If a change log indicates that a revision is a bug-fix, we assume the revision is a bug-fix and that all hunks in the revision are bug-fixes. But how many of them are true bug-fixes? That depends on the quality of the change log and on the interpretation of what counts as a bug-fix: one developer may think a change is a bug-fix, while others may think it is only a source code cleanup or a new feature addition. To check how many bug-fix hunks are true bug-fixes, we manually verified all bug-fix hunks and marked them as bug-fix or non-bug-fix. Two human judges, graduate students with multiple years of Java development experience, performed the manual verification. One judge marked each bug-fix hunk of the two projects (see Table 1), and the other judge reviewed the marks. The judges used a GUI-based bug-fix hunk verification tool, which shows the individual hunks in each bug-fix revision. The judges read the change logs and source code carefully and decided whether each hunk is a bug-fix. The total time spent is shown in Table 2.

Table 2. Manual fix hunk validation time of the two human judges.

    Judges    Columba     Eclipse
    Judge 1   3.5 hours   4 hours
    Judge 2   4.5 hours   5 hours

The most common kind of non-bug-fix hunk in bug-fix revisions involves variable renaming, as shown in Figure 15. This kind of variable renaming does not affect software behavior, but it is not easy to detect automatically without performing deep static or dynamic analysis.

    - IResource[] remaingFiles;
    + IResource[] remainingFiles;
      try {
    -   remaingFiles = ((IFolder) res).members();
    +   remainingFiles = ((IFolder) res).members();
      }

Figure 15. Variable renaming example (in ...ResourceElementsOperation).

We identify bug-introducing changes after the manual fix hunk validation, as shown in Figure 16. Manual verification removes 4%-5% of false positives. Unfortunately, the manual validation requires domain knowledge and does not scale. However, the number of false positives removed by manual verification was not substantial. We believe it is possible to skip the manual validation for bug-introducing change identification. We compare the overall false positives and false negatives of the automatic algorithms with and without manual validation in the next section.

Figure 16. Bug-introducing change sets after manual fix hunk validation.

4.6. Summary

We applied the steps described in Figure 2 to remove false positive and false negative bug-introducing changes. In this section we compare the bug-introducing change sets identified using the original SZZ algorithm [16] with those from our new algorithm (steps 1-5 in Figure 2). Overall, Figure 17 shows that applying our algorithms removes about 38%-51% of false positives and 14%-15% of false negatives, a substantial error reduction.

Figure 17. Bug-introducing changes identified by the original SZZ algorithm [16] (P) and by the approach (steps 1-5) proposed in this paper (R).

The manual bug-fix hunk verification gives us a good sense of how many hunks in bug-fix revisions are true bug-fixes. There is no doubt that manual bug-fix hunk verification leads to more accurate bug-introducing changes. Unfortunately, manual fix hunk verification does not scale. The reason we examined only the first 500-1,000 revisions (Table 1) is the high cost of the manual verification. Figure 18 shows the false positives and false negatives removed by applying only the automatic algorithms (steps 1-4 in Figure 2). The automatic algorithms remove about 36%-48% of false positives and 14% of false negatives, yielding only a 1%-3% difference compared to applying all algorithms (steps 1-5 in Figure 2). Since the errors removed by manual verification are not significant, manual fix hunk verification can be skipped when identifying bug-introducing changes.

Figure 18. Bug-introducing changes identified by the original SZZ algorithm [16] (P) and by the automatable steps (1-4) described in this paper (R).

5. Discussion

In this section, we discuss the relationship between identified bug-fixes and true bug-fixes. We also discuss the relationship between identified bug-introducing changes and true bugs.

5.1. Are All Identified Fixes True Fixes?
