
Do Faster Releases Improve Software Quality? An Empirical Case Study of Mozilla Firefox

Foutse Khomh (1), Tejinder Dhaliwal (1), Ying Zou (1), Bram Adams (2)
(1) Dept. of Elec. and Comp. Engineering, Queen's University, Kingston, Ontario, Canada
(2) GIGL, École Polytechnique de Montréal, Québec, Canada
{foutse.khomh, tejinder.dhaliwal, ying.zou}, bram.adams@polymtl.ca

Abstract—Nowadays, many software companies are shifting from the traditional 18-month release cycle to shorter release cycles. For example, Google Chrome and Mozilla Firefox release new versions every 6 weeks. These shorter release cycles reduce the users' waiting time for a new release and offer better marketing opportunities to companies, but it is unclear if the quality of the software product improves as well, since shorter release cycles result in shorter testing periods. In this paper, we empirically study the development process of Mozilla Firefox in 2010 and 2011, a period during which the project transitioned to a shorter release cycle. We compare crash rates, median uptime, and the proportion of post-release bugs of the versions that had a shorter release cycle with those having a traditional release cycle, to assess the relation between release cycle length and the software quality observed by the end user. We found that (1) with shorter release cycles, users do not experience significantly more post-release bugs and (2) bugs are fixed faster, yet (3) users experience these bugs earlier during software execution (the program crashes earlier).

Keywords—Software release; release cycle; software quality; testing; bugs.

I. INTRODUCTION

In today's fast changing business environment, many software companies are aggressively shortening their release cycles (i.e., the time in between successive releases) to speed up the delivery of their latest innovations to customers [1]. Instead of typically working 18 months on a new release containing hundreds of new features and bug fixes, companies reduce this period to, say, 3 months by limiting the scope of the release to the new features and fixing only the most crucial bugs. For example, with a rapid release model (i.e., a development model with a shorter release cycle), Mozilla could release over 1,000 improvements and performance enhancements with Firefox 5.0 in approximately 3 months [2]. Under the traditional release model (i.e., a development model with a long release cycle), Firefox users used to wait for a year to get some major improvements or new features.

The concept of a rapid release cycle was introduced by agile methodologies like XP [3], which claim that shorter release cycles offer various benefits to both companies and end users. Companies get faster feedback about new features and bug fixes, and releases become slightly easier to plan (short-term vs. long-term planning). Developers are not rushed to complete features because of an approaching release date, and can focus on quality assurance every 6 weeks instead of every couple of months. Furthermore, the higher number of releases provides more marketing opportunities for the companies. Customers benefit as well, since they have faster access to new features, bug fixes and security updates.

However, the claim that shorter release cycles improve the quality of the released software has not been empirically validated yet. Baysal et al. [4] found that bugs were fixed faster (although not statistically significantly) in versions of Firefox using a traditional release model than in Chrome, which uses a rapid release model. Porter et al. reported that shorter release cycles make it impossible to test all possible configurations of a released product [5]. Furthermore, anecdotal evidence suggests that shorter release cycles do not allow enough time to triage bugs from previous versions, and hence hurt the developers' chances of catching persistent bugs [6]. This is why Firefox's current high number of unconfirmed bugs has been attributed to the adoption of the 6-week release cycle [6]. In August 2011, Firefox had about 2,600 bugs that had not been touched since the release of Firefox 4 five months earlier. The number of Firefox bugs that were touched, but not triaged or worked on, was even higher and continues to grow every day [6].

To understand whether and how transitioning to a rapid release model can affect the quality of a software system as observed by users, we empirically study the historical field testing data of Mozilla Firefox. Firefox is a hugely popular web browser that has shifted from the traditional development model to a rapid release model. This allows us to compare the quality of traditional releases to that of rapid releases, while controlling for unpredictable factors like development process and personnel (since those largely remained constant). As measures of the quality of Firefox, we analyze the number of post-release bugs, the daily crash counts and the uptime of Firefox (i.e., the time between a user starting up Firefox and experiencing a failure).

We studied the following three research questions:

RQ1) Does the length of the release cycle affect the software quality?
There is only a negligible difference in the number of post-release bugs when we control for the time interval between subsequent release dates. However, the median uptime is significantly lower for versions developed in short release cycles, i.e., failures seem to occur faster at run-time.

RQ2) Does the length of the release cycle affect the fixing of bugs?
Bugs are fixed significantly faster for versions developed in a rapid release model.

RQ3) Does the length of the release cycle affect software updates?
Versions developed in a rapid release model are adopted faster by customers, i.e., the proportion of customers running outdated versions that possibly contain closed security holes is reduced.

A better understanding of the impact of the release cycle on software quality will help decision makers in software companies to find the right balance between the delivery speed (release cycle) of new features and the quality of their software.

The rest of the paper is organized as follows. Section II provides some background on Mozilla Firefox. Section III describes the design of our study and Section IV discusses the results. Section V discusses threats to the validity of our study. Section VI discusses the related literature on release cycles and software quality. Finally, Section VII concludes the paper and outlines future work.

II. MOZILLA FIREFOX

Firefox is an open source web browser developed by the Mozilla Corporation. It is currently the third most widely used browser, with approximately 25% usage share worldwide [7]. Firefox 1.0 was released in November 2004 and the latest version, Firefox 9, was released on December 20, 2011. Figure 1(a) shows the release dates of major Firefox versions. Firefox followed a traditional release model until version 4.0 (March 2011). Afterwards, Firefox adopted a rapid release model to speed up the delivery of its new features.
This was partly done to compete with Google Chrome's rapid release model [8], [9], which was eroding Firefox's user base. The next subsections discuss the Firefox development and quality control processes.

A. Development Process

Before March 2011, Firefox supported multiple releases in parallel, not only the last major release. Every version of Firefox was followed by a series of minor versions, each containing bug fixes or minor updates over the previous version. These minor versions continued even after a new major release was made. Figure 1(b) shows the release dates of the minor versions of Firefox.

With the advent of shorter release cycles in March 2011, new features need to be tested and delivered to users faster. To achieve this goal, Firefox changed its development process. First, versions are no longer supported in parallel, i.e., a new version supersedes the previous ones. Second, every Firefox version now flows through four release channels: NIGHTLY, AURORA, BETA and MAIN. The versions move from one channel to the next every 6 weeks [10]. To date, five major versions of Firefox (i.e., 5.0, 6.0, 7.0, 8.0, 9.0) have finished the new rapid release model.

Figure 2 illustrates the current development and release process of Firefox. The NIGHTLY channel integrates new features from the developers' source code repositories as soon as the features are ready. The AURORA channel inherits new features from NIGHTLY at regular intervals (i.e., every 6 weeks). The features that need more work are disabled and left for the next import cycle into AURORA. The BETA channel receives only new AURORA features that are scheduled by management for the next Firefox release. Finally, mature BETA features make it into MAIN. Note that at any given time (independent from the 6-week release schedule) unscheduled releases may be performed to address critical security or stability issues.

Figure 2. Development and Release Process of Mozilla Firefox: each version spends 6 weeks in each of the NIGHTLY, AURORA, BETA and MAIN channels.

Firefox basically follows a pipelined development process. At the same time as the source code of one release is imported from the NIGHTLY channel into the AURORA channel, the source code of the next release is imported into the NIGHTLY channel. Consequently, four consecutive releases of Firefox migrate through Mozilla's NIGHTLY, AURORA, BETA, and MAIN channels at any given time. Figure 2 illustrates this migration.

B. Quality Control Process

One of the main reasons for splitting Firefox's development process into pipelined channels is to enable incremental quality control. As changes make their way through the release process, each channel makes the source code available for testing to a ten-fold larger group of users. The estimated number of contributors and end users on the

channels are respectively 100,000 for NIGHTLY, 1 million for AURORA, 10 million for BETA and 100 million for a major Firefox version [11]. NIGHTLY reaches Firefox developers and contributors, while the other channels (i.e., AURORA and BETA) recruit external users for testing. The source code on AURORA is tested by web developers who are interested in the latest standards, and by Firefox add-on developers who are willing to experiment with new browser APIs. The BETA channel is tested by Firefox's regular beta testers.

Figure 1. Timeline of Firefox versions: (a) release dates of the major versions, under the traditional and the rapid release cycle; (b) release dates of the minor versions.

Each version of Firefox in any channel embeds an automated crash reporting tool, i.e., the Mozilla Crash Reporter, to monitor the quality of Firefox across all four channels. Whenever Firefox crashes on a user's machine, the Mozilla Crash Reporter [12] collects information about the event and sends a detailed crash report to the Socorro crash report server. Such a crash report includes the stack trace of the failing thread and other information about the user environment, such as the operating system, the version of Firefox, the installation time, and a list of installed plug-ins.

Socorro groups similar crash reports into crash-types. These crash-types are then ranked by their frequency of occurrence by the Mozilla quality assurance teams. For the top crash-types, testers file bugs in Bugzilla and link them to the corresponding crash-type in the Socorro server. Multiple bugs can be filed for a single crash-type and multiple crash-types can be associated with the same bug. For each crash-type, the Socorro server provides a crash-type summary, i.e., a list of the crash reports of the crash-type and a set of bugs that have been filed for the crash-type.

Firefox users can also submit bug reports in Bugzilla manually.
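Socorro's grouping and ranking of crash reports, as described above, can be approximated with a short sketch. The record layout and the use of a single `signature` string per report are illustrative assumptions, not Socorro's actual schema:

```python
from collections import Counter

# Hypothetical crash reports: Socorro groups similar reports into crash-types.
# Here a crash-type is approximated by one "signature" string per report;
# field names and values are assumptions for illustration.
reports = [
    {"signature": "nsCOMPtr::Assert", "version": "5.0", "os": "Windows"},
    {"signature": "js::GC",           "version": "5.0", "os": "Linux"},
    {"signature": "nsCOMPtr::Assert", "version": "6.0", "os": "Windows"},
]

# Rank crash-types by their frequency of occurrence, as the Mozilla QA
# teams do; bugs would then be filed for the top-ranked crash-types.
crash_types = Counter(r["signature"] for r in reports)
top_crash_types = crash_types.most_common()
```

In this toy data set, `top_crash_types` starts with the signature that occurred twice, mirroring how the most frequent crash-types surface first for triage.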
A bug report contains detailed semantic information about a bug, such as the bug open date, the last modification date, and the bug status. The bugs are triaged by bug triaging developers and assigned for fixing. When a developer fixes a bug, he typically submits a patch to Bugzilla. Once approved, the patch code is integrated into the source code of Firefox on the corresponding channel and migrated through the other channels for release. Bugs that take too long to get fixed and hence miss a scheduled release are picked up by the next release's channel.

III. STUDY DESIGN

This section presents the design of our case study, which aims to address the following three research questions:

1) Does the length of the release cycle affect the software quality?
2) Does the length of the release cycle affect the fixing of bugs?
3) Does the length of the release cycle affect software updates?

A. Data Collection

In this study, we analyze all versions of Firefox that were released in the period from January 01, 2010 to December 21, 2011. In total, we study 25 alpha versions, 25 beta versions, 29 minor versions and 7 major versions that were released within a period of one year before or after the move to a rapid release model. Firefox 3.6, Firefox 4 and their subsequent minor versions were developed following a traditional release cycle with an average cycle time of 52 weeks between the major version releases and 4 weeks between the minor version releases. Firefox 5, 6, 7, 8, 9 and their subsequent minor versions followed a rapid release model with an average release time interval of 6 weeks between the major releases and 2 weeks between the minor releases. Table I shows additional descriptive statistics of the different versions.
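The classification of versions by release model and the computation of cycle times can be sketched as follows. The release dates and the cutover point below are illustrative values, not the study's full data set:

```python
from datetime import date

# Illustrative release dates (not the full data set of Table I). The release
# cycle of a version is the time between the release dates of two consecutive
# versions; per the study, versions released after Firefox 4.0 (March 2011)
# follow the rapid release (RR) model, earlier ones the traditional (TR) model.
releases = [
    ("4.0", date(2011, 3, 22)),   # assumed cutover release date
    ("5.0", date(2011, 6, 21)),
    ("6.0", date(2011, 8, 16)),
]
RAPID_RELEASE_START = date(2011, 3, 22)

cycles = {}
for (_, prev_date), (name, rel_date) in zip(releases, releases[1:]):
    model = "RR" if rel_date > RAPID_RELEASE_START else "TR"
    cycles[name] = (model, (rel_date - prev_date).days)
# e.g. cycles["6.0"] -> ("RR", 56)
```

The development time discussed later in Section III-B would be computed the same way, but from the start date of the development phase rather than the previous release date.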

Table I
STATISTICS FROM THE ANALYZED FIREFOX VERSIONS (THE CYCLE TIME IS GIVEN IN DAYS).

Model | Major Version | Alpha Versions (#) | Beta Versions (#) | Minor Versions (#)
Traditional release model | 3.6 | 3.6a1pre-3.6b6pre (8) | 3.6b1-3.6b6 (6) | 3.6.2-3.6.24 (22)
Traditional release model | 4.0 | 4.0.b1pre-4.0.b12pre (12) | 4.0.b1beta-4.0.1beta (14) | 4.0.1 (1)
Rapid release model | 5.0 | 5.0Aurora (1) | 5.0Beta (1) | 5.0.1 (1)
Rapid release model | 6.0 | 6.0Aurora (1) | 6.0Beta (1) | 6.0.1, 6.0.2 (2)
Rapid release model | 7.0 | 7.0Aurora (1) | 7.0Beta (1) | 7.0.1 (1)
Rapid release model | 8.0 | 8.0Aurora (1) | 8.0Beta (1) | 8.0.1 (1)
Rapid release model | 9.0 | 9.0Aurora (1) | 9.0Beta (1) | 9.0.1 (1)

B. Data Processing

Figure 3 shows an overview of our approach. First, we check the release notes of Firefox and classify the versions based on their release model (i.e., traditional release model and rapid release model). Then, for each version, we extract the necessary data from the source code repository (i.e., Mercurial), the crash repository (i.e., Socorro), and the bug repository (i.e., Bugzilla). Using this data, we compute several metrics, then statistically compare these metrics between the traditional release (TR) model group and the rapid release (RR) model group. The remainder of this section elaborates on each of these steps.

1) Analyzing the Mozilla Wiki: For each version, we extract the starting date of the development phase and the release date from the release notes on the Mozilla Wiki. The release cycle is the time period between the release dates of two consecutive versions. We also compute the development time of the version by calculating the difference between the release date and the starting date of the development phase. The development time is slightly longer than the release cycle because the development of a new version is started before the release of the previous one.

2) Mining the Mozilla Source Code Repository: On the source code of each downloaded version, we use the source code measurement tool SourceMonitor to compute the number of Total Lines of Code and the Average Complexity. SourceMonitor can be applied to C++, C, C#, VB.NET, Java, Delphi, Visual Basic (VB6), and HTML source code files. Such a polyvalent tool is necessary, given the diverse set of programming languages used by Firefox.

3) Mining the Mozilla Crash Repository: We downloaded the summaries of crash reports for all versions of Firefox that were released between January 21, 2010 and December 21, 2011. From these summaries, we extracted the date of the crash, the version of Firefox that was running during the crash, the list of related bugs, and the uptime (i.e., the duration in seconds for which Firefox was running before it crashed).

4) Analyzing the Mozilla Bug Repository: We downloaded all Firefox bug reports related to the Firefox crashes. These reports contain both pre-release and post-release bugs. We parse each of the bug reports to extract information about the bug status (e.g., UNCONFIRMED, FIXED), the bug open and modification dates, the priority of the bug and the severity of the bug. However, we cannot directly identify the major or minor version of Firefox for which the bug was raised, since this is not recorded.

Since the analyzed bugs are related to crashes, and crashes are linked to specific versions, we instead use this mapping to link the bugs to Firefox versions. For each bug, we check the crash-types for which the bug is filed. Then, we look at the crash reports of the corresponding crash-type(s) to identify the version that produces the crash-type, and we link the bug to that version. When the same crash-type contains crash reports from users on different versions, we consider that the crash-type is generated by the oldest version.

IV. CASE STUDY RESULTS

This section presents and discusses the results of our three research questions. For each research question, we present the motivation behind the question, the analysis approach and a discussion of our findings.

A. RQ1: Does the length of the release cycle affect the software quality?

Motivation. Despite the benefits of speeding up the delivery of new features to users, shorter release cycles could have a negative impact on the quality of software systems, since there is less time for testing. Many reported issues are likely to remain unfixed until the software is released. This in turn might expose users to more post-release bugs. On the other hand, with fast release trains (e.g., every 6 weeks), developers are less pressured to rush half-baked features into the software repository to meet the deadline. Hence, a rapid release model could actually introduce fewer bugs compared to traditional release models. Clearing up the interaction between both factors is important to help decision makers in software organizations find the right balance between the speed of delivery of new features and maintaining software quality.

Approach. We measure the quality of a software system using the following three well-known metrics:

- Post-Release Bugs: the number of bugs reported after the release date of a version (lower is better).

- Median Daily Crash Count: the median of the number of crashes per day for a particular version (lower is better).
- Median Uptime: the median across the uptime values of all the crashes that are reported for a version (higher is better).

Figure 3. Overview of our approach to study the impact of release cycle time on software quality: for each version, we extract release dates from the Mozilla Wiki, source code from the Mercurial repository, crash data from Socorro, and bug reports from Bugzilla; we then compute metrics for each version and compare the TR and RR groups (RQ1-RQ3).

We answer this research question in three steps. First, we compare the number of post-release bugs between the traditional release (i.e., TR) and rapid release (i.e., RR) groups. For each Firefox version, we consider all bugs reported after its release date. Note that we cannot perform this comparison directly. Herraiz et al. [13] have shown that the number of reported post-release bugs of a software system is related to the number of deployments. In other words, a larger number of deployments increases the likelihood of users reporting a higher number of bugs.
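The two crash-based metrics defined above can be sketched in a few lines, assuming each extracted crash record carries a crash date and an uptime in seconds (an illustrative layout, not Socorro's actual summary format):

```python
from collections import Counter
from statistics import median

# Illustrative crash records for one version: (crash date, uptime in seconds).
# The layout is an assumption; the real data comes from Socorro summaries.
crashes = [
    ("2011-06-22", 120), ("2011-06-22", 300),
    ("2011-06-23", 60),  ("2011-06-24", 600),
]

# Median Daily Crash Count: the median of the number of crashes per day.
crashes_per_day = Counter(day for day, _ in crashes)
median_daily_crash_count = median(crashes_per_day.values())

# Median Uptime: the median across the uptime values of all reported crashes.
median_uptime = median(uptime for _, uptime in crashes)
```

For these four records, the daily counts are [2, 1, 1] (median 1) and the uptimes are [120, 300, 60, 600] (median 210 seconds).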
Since the number of deployments is affected by the length of the period during which a release is used, and this usage period is directly related to the length of the release cycle, we need to normalize the number of post-release bugs of each version to control for the usage time. Hence, for each version, we divide the number of reported post-release bugs by the length of the release cycle of the version, and test the following null hypothesis:

H1_01: There is no significant difference between the number of post-release bugs of RR versions and TR versions.

Second, we analyze the distribution of the median daily crash counts for RR and TR versions, and test the following null hypothesis:

H1_02: There is no significant difference between the median daily crash count of RR versions and TR versions.

Third, we compare the median uptime of RR versions to TR versions. We test the following null hypothesis:

H1_03: There is no significant difference between the median uptime values of RR versions and TR versions.

We use the Wilcoxon rank sum test [14] to test H1_01, H1_02, and H1_03. The Wilcoxon rank sum test is a non-parametric statistical test used for assessing whether two independent distributions have equally large values. Non-parametric statistical methods make no assumptions about the distributions of the assessed variables.

Findings. When controlled for the length of the release cycle of a version, there is no significant difference between the number of post-release bugs of rapid release and traditional release versions. Figure 4 shows the distribution of the normalized number of post-release bugs for TR and RR versions, respectively. We can see that the medians are similar for RR and TR versions. The Wilcoxon rank sum test confirms this observation (p-value = 0.3); therefore, we cannot reject H1_01.

Figure 4. Boxplot of the number of post-release bugs raised per day.

There is no significant difference between the median daily crash count of rapid release versions and traditional release versions. The Wilcoxon rank sum test yielded a p-value of 0.73. Again, we cannot reject H1_02.

The median uptime is significantly lower for rapid release versions. Figure 5 shows the distribution of the median uptime across TR and RR versions, respectively. We can observe that the median uptime is lower for RR versions. We ran the Wilcoxon rank sum test to decide whether the observed difference is statistically significant, and obtained a p-value of 6.11e-06. Therefore, we reject H1_03.

In general, we can conclude that although the median of daily crash counts and the number of post-release bugs are

comparable for RR versions and TR versions, the median uptime of RR versions is lower. In other words, although rapid releases do not seem to impact software quality directly, end users do get crashes earlier during execution (H1_03), i.e., the bugs of RR versions seem to have a higher show-stopper probability than the bugs of TR versions. It is not clear why exactly this happens, i.e., whether it is caused by a quality assurance problem or by accident (i.e., one or more show-stopper bugs with a high impact).

Figure 5. Boxplot of the median uptime (in seconds) for TR and RR versions.

Users experience crashes earlier during the execution of versions developed following a rapid release model.

B. RQ2: Does the length of the release cycle affect the fixing of bugs?

Motivation. For RQ1, we found that when one controls for the cycle time of versions, there is no significant difference between the number of post-release bugs of traditional release and rapid release versions reported per day. However, since a shorter release cycle allows less time for testing and there was no substantial change in the development team of Firefox when switching to shorter release cycles, we might expect that the same group of developers now has less time to fix the same stream of bugs. Hence, in this question, we investigate the proportion of bugs fixed and the speed with which bugs are fixed in the rapid release model.

Approach. For each alpha, beta, and major version, we compute the following metrics:

- Fixed Bugs: the number of post-release bugs that are closed with the status field set to FIXED (higher is better).
- Unconfirmed Bugs: the number of post-release bugs with the status field set to UNCONFIRMED (lower is better).
- Fix Time: the duration of the fixing period of the bug (i.e., the difference between the bug open time and the last modification time). This metric is computed only for bugs with the status FIXED (lower is better).

We test the following null hypothesis to compare the efficiency of testing activities under the traditional and rapid release models:

H2_01: There is no significant difference between the proportion of bugs fixed during the testing period of a RR version and the proportion of bugs fixed during the testing period of a TR version.

We consider the testing period of a version v_i to be the period between the release date of the first alpha version of v_i and the release date of v_i. As such, bugs opened or fixed during this period correspond to post-release bugs of the alpha or beta versions of v_i. To compute the proportion of bugs fixed during the testing period, we divide the number of bugs fixed in the testing period by the total number of bugs opened during the testing period. We do not further divide by the length of the testing period, since, as discussed in RQ1, both the number of fixed bugs and the number of opened bugs depend on the length of the testing period.

To assess and compare the speed at which post-release bugs are fixed under the traditional and rapid release models, we test the following null hypothesis:

H2_02: There is no significant difference between the distribution of Fix Time values for bugs related to TR versions and bugs related to RR versions.

We also investigate a similar hypothesis for high priority bugs only. Because high priority bugs are likely to impede or prevent the use of core functionalities, we expect that they will be fixed in the same timely manner under the traditional and rapid release models.

For this, we classify all the bugs based on their priority, i.e., for each bug, we extract the priority and severity values from the corresponding bug report. Since only 5% of Mozilla bugs from our data set are filed with priority values, we rely on the severity value of a bug report if the priority value is absent. Severity values are always available in the bug reports from our data set.
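This priority-to-severity fallback can be sketched as follows. The field names and the `P1`/`P2` cutoff for an explicitly high priority are assumptions for illustration, while the high-severity labels follow the heuristic used in this study:

```python
# Sketch of the priority-to-severity fallback: a bug counts as high priority
# if it was explicitly filed with a high priority value; otherwise its
# severity decides. Field names and the P1/P2 cutoff are assumptions;
# the severity labels follow the study's heuristic.
HIGH_SEVERITIES = {"critical", "major", "blocker"}

def is_high_priority(bug):
    priority = bug.get("priority")        # e.g. "P1".."P5"; absent for ~95% of bugs
    if priority:
        return priority in {"P1", "P2"}   # explicit priority takes precedence
    return bug.get("severity") in HIGH_SEVERITIES
```

Note that an explicitly filed low priority wins over a high severity, which matches the rule of only falling back to severity when the priority value is absent.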
In our analysis, we consider a bug to have a high priority if the bug was filed explicitly with a high priority value or if the bug's severity level is either "critical", "major", or "blocker". We used this heuristic before, with good results [15]. We can then test the following null hypothesis:

H2_03: There is no significant difference between the distribution of Fix Time values for high-priority bugs related to TR versions and high-priority bugs related to RR versions.

Similar to RQ1, hypotheses H2_01, H2_02 and H2_03 are two-tailed. We perform a Wilcoxon rank sum test to accept or refute them.

Findings. When following a rapid release model, the proportion of bugs fixed during the testing period is lower than the proportion of bugs fixed in the testing period under the traditional release model. Figure 6 shows the distribution of the proportion of bugs fixed during the testing period of TR and RR versions. We can

observe that the proportion of bugs fixed is higher under the traditional release model. The Wilcoxon rank sum test returned a significant p-value of 0.003. Therefore, we reject H2_01.

Figure 6. Boxplot of the proportion of bugs fixed during the testing period, for TR and RR versions on the MAIN and BETA channels.

Bugs are fixed faster under a rapid release model. Figure 7 shows the distributions of the bug fixing time for TR and RR versions, respectively. We can see that developers take almost three times longer to fix bugs under the traditional release cycle. The medians of bug fixing times under the traditional release and rapid release models are respectively 16 days and 6 days. The result of the Wilcoxon rank sum test shows that the observed difference is statistically significant (p-value = 5.22e-08). Therefore, we reject H2_02.

Figure 7. Boxplot of bug fixing time (Bug Age in Days) for TR and RR versions.

When limiting our comparison to high priority bugs, we again obtain a statistically significant difference, with a smaller p-value (< 2.2e-16). Hence, we can also reject H2_03.

In order to see whether the observed difference in the bug fixing time and the proportion of bugs fixed is caused by source code size or complexity, we compute the following source code metrics on TR and RR versions. We compute the metrics on all files contained in a version.

- Total Lines of Code: the total number of lines of code of all files contained in a version.
- Average Complexity: the average of the McCabe Cyclomatic Complexity of all files contained in a version. The McCabe Cyclomatic Complexity of a file is the count of the number of linearly independent paths through the source code contained in the file.
- Development Time: the duration in days of the development phase of a version.
- Rate of New Code: the total number of new lines of code added in the version divided by the Development Time.

We found no significant difference between the complexity of traditional release and rapid release versions. Also, the rate of new code in major RR versions is similar to the rate of new code in minor TR versions. This finding is consistent with our other finding that the development time of major RR versions is similar to the development time of minor TR versions.

In summary, we found that although bugs are fixed faster during a shorter release cycle, a smaller proportion of bugs is fixed compared to the traditional release model, which allows a longer testing period. We analyzed the bugs reported during the testing period (i.e., excluding post-release bugs), and fo
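The comparisons in RQ1 and RQ2 rely on the Wilcoxon rank sum test. A minimal pure-Python sketch of this test (normal approximation with midranks for ties, and without a tie correction in the variance), applied to illustrative fix-time samples rather than the study's data, could look like this:

```python
from math import sqrt
from statistics import NormalDist

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank sum test via the normal approximation.

    Returns a p-value for the null hypothesis that the two independent
    samples come from distributions with equally large values.
    """
    combined = list(x) + list(y)
    order = sorted(range(len(combined)), key=lambda i: combined[i])
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):  # assign midranks so ties share an average rank
        j = i
        while j + 1 < len(combined) and combined[order[j + 1]] == combined[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])                     # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2
    sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative fix times in days (not the study's data):
tr_fix_times = [30, 25, 18, 16, 14, 40, 22]
rr_fix_times = [5, 7, 6, 4, 9, 6, 8]
p_value = rank_sum_test(tr_fix_times, rr_fix_times)  # well below 0.05
```

In practice one would use a library implementation (e.g., a SciPy or R rank-sum routine) with an exact small-sample distribution; this sketch only makes the mechanics of the test explicit.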

