International Journal of the Physical Sciences Vol. 7(7), pp. 1062 - 1072, 9 February, 2012
Available online at http://www.academicjournals.org/IJPS
DOI: 10.5897/IJPS11.1633
ISSN 1992 - 1950 ©2012 Academic Journals

Full Length Research Paper

A harmony search based pairwise sampling strategy for combinatorial testing

Abdul Rahman A. Alsewari and Kamal Z. Zamli*
School of Electrical and Electronic Engineering, Universiti Sains Malaysia.

Accepted 13 January, 2012

Over the years, we have become increasingly dependent on software in many activities of our lives. To ensure software quality and reliability, many combinations of possible input parameters, hardware/software environments and system conditions need to be tested and verified for conformance. Due to resource constraints and time-to-market pressure, exhaustive testing is practically impossible. In order to address this issue, a number of pairwise testing (and sampling) strategies have been developed in the literature over the past 15 years. In this paper, we propose and evaluate a novel pairwise strategy called the pairwise harmony search algorithm-based strategy (PHSS). Based on the published benchmarking results, the PHSS strategy outperforms most existing strategies in terms of the generated test size in many of the parameter configurations considered. In the cases where PHSS is not the most optimal, the resulting test size is still sufficiently competitive. PHSS serves as our research vehicle to investigate the effective use of the harmony search (HS) algorithm for pairwise test data reduction.

Key words: Pairwise testing, harmony search algorithm, software testing, combinatorial explosion problem.

INTRODUCTION

Computing technology has come a long way since the first Babbage computer. Today, many chores that were once manual have been taken over by computers. Banks use computers to maintain customers' accounts.
Electronics manufacturers use computers to test everything from basic microelectronics to circuit card assemblies. Software is what drives the computer. Our dependence on software raises fundamental issues of quality and reliability. Here, software testing becomes immensely important. Providing confidence, identifying weaknesses, imposing an acceptable degree of quality, and establishing the extent to which the requirements have been met are amongst the reasons for software testing. The aim of software testing is not to prove anything; rather, it is to reduce the perceived risk of the software not working to an acceptable level. In order to do so, software engineers need to consider a significantly large number of test data. Many combinations of possible input parameters, hardware/software environments and system conditions need to be considered, resulting in combinatorial explosion problems (that is, too many tests). A number of approaches have been explored in the past to address this combinatorial explosion problem. Parallel testing can be a viable alternative to reduce the time required for performing all the tests (Cohen et al., 1997; Cohen et al., 2003; Colbourn et al., 2004). Nevertheless, as software and hardware are getting more complex than ever, the parallel testing approach becomes immensely expensive due to the need to invest in more and more resources (that is, in terms of people, computing power and time). Complementary to parallel testing, systematic random testing could be another option (Shiba et al., 2004; Cohen et al., 2003; Ahmed and Zamli, 2011). However, systematic random testing tends to suffer from an unfair distribution of test cases. Obviously, there is a need for a more systematic sampling strategy in order to reduce the test data set to a manageable size.

*Corresponding author. E-mail: eekamal@eng.usm.my.
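To make the combinatorial explosion concrete, the following sketch (illustrative only; the function names are ours, not from the paper) contrasts the exhaustive test count of a configuration with a simple lower bound on any pairwise test suite:

```python
from math import prod

def exhaustive_count(values_per_param):
    """Number of test cases needed to cover every full combination."""
    return prod(values_per_param)

def pairwise_lower_bound(values_per_param):
    """A pairwise suite must be at least as large as the product of the
    two largest parameter domains, since every pair of values for those
    two parameters must appear together in some test case."""
    a, b = sorted(values_per_param)[-2:]
    return a * b

# Example: 10 parameters, each with 5 values
config = [5] * 10
print(exhaustive_count(config))      # 9765625 exhaustive tests
print(pairwise_lower_bound(config))  # at least 25 pairwise tests
```

The gap between these two numbers (nearly six orders of magnitude here) is what motivates pairwise sampling strategies.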
As such, much research is now focusing on pairwise (that is, 2-way) testing (Cohen et al., 1996; Cohen et al., 1997; Cohen, 2004; Yu-Wen and Aldiwan, 2000; Arshem, 2010; Bach, 2001; Pallas, 2003; Klaib et al., 2008; Keith and Doug, 2006; Williams, 2000; Lei et al., 2007; Ahmed and Zamli, 2011). In fact, much empirical evidence suggests that most failures in real software systems are often caused by unwanted 2-way interactions between system parameters (Williams, 2002; Cohen et al., 2003; Zamli et al., 2011). One of the main issues in pairwise testing is the generation of an optimal test set (that is, each pairwise interaction is covered by at most one test whenever possible) from a potentially large set of possible test parameter values. Here, searching for the optimal set of test cases is an NP-hard problem; that is, an increase in the parameter size causes an exponential increase in the computational time as well as in the degree of problem complexity (Shiba et al., 2004; Yuan et al., 2011; Danziger et al., 2009). As a result, many strategies (and their tool implementations) have been designed in the literature. Strategies based on artificial intelligence (AI) algorithms [for example, the genetic algorithm (GA) (Shiba et al., 2004), the ant colony algorithm (ACA) (Shiba et al., 2004), simulated annealing (SA) (Cohen et al., 2003) and pairwise particle swarm optimization test generation (PPSTG) (Ahmed and Zamli, 2011)] have been particularly attractive, as they have been proven to give good performance in many engineering applications [for example, structural design (Lee and Geem, 2004; Kaveh and Talatahari, 2009), water network design (Geem, 2009) and traffic routing (Renaud et al., 1996; Geem et al., 2005a)]. Nonetheless, these AI-based strategies are not without limitations. Although useful, GA, ACA and SA based strategies tend to be computationally intensive; thus, they are not scalable for addressing large configurations (Cohen et al., 2003; Shiba et al., 2004; Ahmed and Zamli, 2011).
While offering lightweight computation (hence, inherent scalability to support large configurations), PPSTG cannot sufficiently strike the balance between exploration (global search of the search space) and exploitation (excellent search around a local optimum) (Kaveh and Talatahari, 2009). Thus, the PPSTG solution, in some configurations, can potentially sway away from the optimal results. Addressing the aforementioned issues and complementing existing work, we are investigating the use of the harmony search (HS) algorithm for our pairwise strategy, called the pairwise harmony search algorithm-based strategy (PHSS). Among the advantages of HS which justify our choice for PHSS are the following: 1) HS offers a good balance as far as diversification and intensification are concerned (Yang, 2009). 2) HS performs well compared to other AI-based algorithms for various engineering applications (Kim et al., 2001; Geem et al., 2005a; Geem et al., 2005b; Geem and Hwangbo, 2006; Geem et al., 2006; Kim et al., 2006; Ayvaz, 2007; Geem, 2007a; Geem, 2007b; Geem and Choi, 2007; Ryu et al., 2007; Forsati et al., 2008; Mahdavi et al., 2008). 3) HS is free from divergence (Geem and Kim, 2001). This paper discusses the design, implementation and assessment of PHSS.
Based on the published benchmarking results, the PHSS strategy outperforms most existing strategies [for example, the automatic efficient test generator (AETG) (Cohen et al., 1996; Cohen et al., 1997), an improved AETG, mAETG (Cohen, 2004), TVG (Yu-Wen and Aldiwan, 2000; Arshem, 2010), the All pairs strategy (Bach, 2001), Jenny (Pallas, 2003), G2Way (Klaib et al., 2008), PICT (Keith and Doug, 2006), TConfig (Williams, 2000), CTE XL (Lehmann and Wegener, 2000), in-parameter-order (IPO) (Lei and Tai, 1998), IPO-general (IPOG) (Lei et al., 2007), IRPS (Younis et al., 2008), SA (Yan and Zhang, 2008), GA (Shiba et al., 2004), ACA (Shiba et al., 2004) and PPSTG (Ahmed and Zamli, 2011)] in terms of the generated test size in many of the configurations considered. In the cases where PHSS is not the most optimal, the resulting test size is still sufficiently competitive. PHSS serves as our research vehicle to investigate the effective use of the HS algorithm for pairwise test data generation.

RELATED WORK

In general, existing interaction strategies for pairwise testing can be divided into two categories based on the dominant approach, that is, algebraic approaches or computational approaches (Lei et al., 2007). Algebraic approaches construct test sets using predefined rules or mathematical functions (Lei et al., 2007). Thus, the computations involved in algebraic approaches are typically lightweight, and in some cases, algebraic approaches can produce the most optimal test sets. However, the applicability of algebraic approaches is often restricted to small configurations (Yan and Zhang, 2008; Lei et al., 2007). Orthogonal arrays (OA) (Hedayat et al., 1999; Hartman and Raskin, 2004), mathematics of arrays (MOA) (Mandl, 1985) and TConfig (Williams, 2002) are typical examples of strategies based on the algebraic approach. Unlike algebraic approaches, computational approaches often rely on the generation of all pair combinations.
Based on all pair combinations, the computational approaches iteratively search the combination space to generate the required test cases until all pairs have been covered. In this manner, computational approaches can ideally be applicable even for large system configurations. However, in the case where the number of pairs to be considered is significantly large, adopting computational approaches can be expensive due to the need to explicitly enumerate the entire combination space. Example strategies that adopt this approach include AETG (Cohen et al., 1996; Cohen et al., 1997), its variant mAETG (Cohen, 2004), PICT (Keith and Doug, 2006), IPO (Lei and Tai, 1998), IPOG (Lei et al., 2007), Jenny

(Pallas, 2003), all pairs (Bach, 2001), TVG (Yu-Wen and Aldiwan, 2000; Arshem, 2010), CTE XL (Lehmann and Wegener, 2000), IRPS (Younis et al., 2008), G2Way (Klaib et al., 2008), GA (Shiba et al., 2004), ACA (Shiba et al., 2004), SA (Yan and Zhang, 2008) and PPSTG (Ahmed and Zamli, 2011). AETG (Cohen et al., 1996; Cohen et al., 1997) and its variant mAETG (Cohen, 2004) employ a greedy random search algorithm based on 2-way interaction pairing in order to generate the final test suite. In this manner, the generated test suite is highly non-deterministic. As for PICT (Keith and Doug, 2006), it first generates all the specified interactions and then randomly selects their corresponding interaction combinations to form the test cases as part of the complete test suite. The IPO strategy (Lei and Tai, 1998) builds a pairwise test set for the first two parameters. Then, the IPO strategy extends the test set to cover the first three parameters, and continues to extend the test set until it builds a pairwise test set for all the parameters. Apart from being deterministic in nature, covering one parameter at a time allows the IPO strategy to achieve a lower order of complexity than AETG. Recently, IPO has been extended to address higher interaction strengths in the development of IPOG (Lei et al., 2007). Jenny (Pallas, 2003) generates test data in a number of stages. Firstly, Jenny generates test data to cover all the 1-way interactions. Then, Jenny extends the first stage test data to greedily cover the 2-way interactions. Optionally, this process can continue up to the nth-way interactions, as specified by the user. The All pairs strategy (Bach, 2001), TVG (Yu-Wen and Aldiwan, 2000; Arshem, 2010) and CTE XL (Lehmann and Wegener, 2000) share the same property as far as producing deterministic test cases is concerned, although little is known about the actual algorithms employed due to the limited availability of references.
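The "all pair combinations" that computational strategies enumerate can be sketched as follows (an illustrative sketch of ours; the representation of a pair as ((i, vi), (j, vj)) is an assumption, not prescribed by any of the cited tools):

```python
from itertools import combinations, product

def all_pairwise_interactions(values_per_param):
    """Enumerate every 2-way interaction tuple ((i, vi), (j, vj)):
    one value for each of two distinct parameter positions."""
    return [((i, vi), (j, vj))
            for i, j in combinations(range(len(values_per_param)), 2)
            for vi, vj in product(values_per_param[i], values_per_param[j])]

# 3 parameters with 2 values each: C(3,2) * 2 * 2 = 12 interaction tuples
pairs = all_pairwise_interactions([[0, 1], [0, 1], [0, 1]])
print(len(pairs))  # 12
```

A pairwise strategy is then simply a search for a small set of test cases whose projections cover every tuple in this list.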
More recent strategies based on computational approaches are IRPS (Younis et al., 2008) and G2Way (Klaib et al., 2008). IRPS is deterministic in nature and focuses on an efficient data structure for storing and searching pairs. In this manner, IRPS gives relatively fast execution times compared to other strategies. G2Way adopts a backtracking algorithm to merge combinable pairs in order to generate the pairwise test suite. Unlike other strategies, G2Way also supports automated execution of the generated test suite. Concerning the adoption of AI-based algorithms, much recent work has started to appear, including that on GA, ACA, SA and particle swarm optimization (PSO). In GA, the test data generation process always starts with random test cases (later referred to as chromosomes). These chromosomes undergo a series of mutation processes until certain stopping criteria are met. The best chromosomes are selected as the final test suite. As for ACA, the test data generation process mimics colonies of ants travelling from place to place (representing the parameters) to find food (representing the end of a test case) via various routes (corresponding to the values for each parameter). The best route (measured based on the amount of pheromone left by the colonies of ants) represents the best value for a test case. In a nutshell, SA (Yan and Zhang, 2006) adopts a probability-based transformation equation along with a greedy binary search algorithm to iteratively find the best test case to cover all the required (pairwise) interactions from a random search space. In a similar manner, PPSTG (Ahmed and Zamli, 2011), a PSO-based strategy, iteratively performs local and global searches to find the candidate solution to be added to the final suite until all the pairwise interactions are covered.

HS ALGORITHM

The HS algorithm is analogous to the improvisation process of a skilled musician.
In the process, there are three possible options: (1) playing any famous tune exactly from his or her memory [the harmony memory (HM), with harmony memory size (HMS)]; (2) playing something similar to the aforementioned tune (thus adjusting the pitch slightly); (3) composing new or random notes. Geem et al. (2001) formalized these three options into a quantitative optimization process with three corresponding components: HM consideration, pitch adjustment and randomization. The first component for consideration is the HM. In this case, the HM ensures that good harmonies are considered as elements of new solution vectors. To use this memory effectively, the HS algorithm adopts a parameter R_HMCR, called the HM considering (or accepting) rate, with a value ranging from 0 to 1. If this rate is too low, only a few best harmonies are selected and convergence may be too slow. If this rate is extremely high (near 1), the pitches in the HM are mostly used and other pitches are not explored well, leading to poor solutions. For these reasons, Yang (2009) suggested values of R_HMCR between 0.7 and 0.95. Equation 1 shows how the HS algorithm selects the value of the new pitch x'_i:

x'_i <- x_i in {x_i^1, x_i^2, ..., x_i^HMS}   with probability R_HMCR
x'_i <- x_i in X_i                            with probability (1 - R_HMCR)      (1)

Here, x_i is an existing pitch stored in the HM and x'_i is the new pitch. A random number R_random determines whether x'_i will be taken from the values stored in the HM or randomly generated. The pitch adjustment is the second component that needs to be tuned and adjusted. Here, the pitch adjustment includes adjusting the pitch bandwidth b_range and the pitch adjusting rate P_PAR. While pitch adjustment in music changes the frequency, pitch adjustment in the HS algorithm means generating a different value iteratively (Geem and Kim, 2001). In theory, the pitch adjusting rate can

be adjusted linearly or nonlinearly; however, in practice, linear adjustment is used according to Equations 2 and 3:

x'_i <- { adjust the pitch       with probability P_PAR
        { keep x'_i unchanged    with probability (1 - P_PAR)                    (2)

x'_i(new) = x'_i(old) + b_range * e                                              (3)

Here, P_random is a random number that decides whether pitch adjusting is applied or not, and x'_i(new) is the new pitch after the pitch adjusting action (or not). The pitch adjusting action produces a new pitch by adding a small random amount to the existing pitch (Lee and Geem, 2005), where e is a random number drawn from a uniform distribution over the range [-1, 1]. Analogously, pitch adjustment takes a similar role to the mutation operator in GA. In the HS algorithm, the degree of adjustment is controlled through the pitch-adjusting rate (P_PAR). A low pitch-adjusting rate with a narrow bandwidth can slow down the convergence of HS, whilst a very high pitch-adjusting rate with a wide bandwidth may cause the solution to scatter around some potential solutions (that is, as in a random search). In general, P_PAR often takes a value in the range 0.1 to 0.99 in most applications (Kim et al., 2001; Lee and Geem, 2004; Geem et al., 2005a; Geem and Choi, 2007; Mahdavi et al., 2007). Figure 1 shows the three components in the HS algorithm.

Begin
  Define objective function f(x), x = (x_1, x_2, ..., x_n)
  Define harmony memory accepting rate (R_HMCR)
  Define pitch adjusting rate (P_PAR) and other parameters
  Generate harmony memory with random harmonies
  While (t < max number of iterations)
    While (i <= number of variables)
      If (R_random < R_HMCR)
        Choose a value from HM for the variable i
        If (P_random < P_PAR)
          Adjust the value by moving to the next or previous value
        Else
          Do not adjust the value chosen from HM
      Else
        Choose a random value
    End while
    Accept and add the new harmony (solution) to HM if better than the worst harmony
  End while
  Find the current best solution
End

Figure 1. The harmony search algorithm.

The randomization is the third component, which forms an essential part of the HS algorithm.
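Putting the three options of Figure 1 together for discrete decision variables, a minimal runnable sketch might look as follows (ours, not the authors' implementation; it assumes neighbor-shift pitch adjustment, and `hmcr` and `par` stand for R_HMCR and P_PAR):

```python
import random

def harmony_search(domains, fitness, hms=10, hmcr=0.7, par=0.2, iterations=500):
    # harmony memory: HMS random solution vectors, one value per variable
    hm = [[random.choice(d) for d in domains] for _ in range(hms)]
    scores = [fitness(h) for h in hm]
    for _ in range(iterations):
        new = []
        for i, d in enumerate(domains):
            if random.random() < hmcr:          # memory consideration
                v = random.choice(hm)[i]
                if random.random() < par:       # pitch adjustment: shift to a
                    k = d.index(v)              # neighboring discrete value
                    v = d[k + 1] if k + 1 < len(d) else d[k - 1]
            else:                               # randomization
                v = random.choice(d)
            new.append(v)
        worst = min(range(hms), key=lambda k: scores[k])
        f = fitness(new)
        if f > scores[worst]:                   # accept if better than the worst
            hm[worst], scores[worst] = new, f
    return max(hm, key=fitness)

# toy objective: maximize the sum of the chosen values
best = harmony_search([[0, 1, 2, 3]] * 4, fitness=sum)
print(best)
```

With 500 improvisations and elitist replacement of the worst harmony, the memory converges toward high-fitness vectors; the toy objective is only a stand-in for the pair-coverage weight used later by PHSS.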
Here, randomization attempts to increase the variety of the solutions (that is, non-determinism). Although the pitch adjustment also increases the diversity of the solutions, it is limited to a certain area and thus corresponds to a local search. The use of randomization can drive the system further, to explore diverse solutions so as to attain global optimality. Equation 4 shows the probability of randomization, from which the actual probability of the pitch adjustment is determined:

P_random = 1 - R_HMCR,     P_pitch = R_HMCR * P_PAR                              (4)

PHSS STRATEGY

The optimization problem of concern can be specified using Equations 5 and 6:

Maximize f(x), x = (x_1, x_2, ..., x_N)                                          (5)

subject to x_i in X_i = {x_i(1), x_i(2), ..., x_i(K_i)}, i = 1, 2, ..., N        (6)

Here, f(x) is an objective function capturing the weight of a test case in terms of the number of covered pairwise interactions; x is the set of decision variables x_i; X_i is the set of possible values for each decision variable, that is, X_i = {x_i(1), ..., x_i(K_i)} for discrete decision variables; N is the number of decision parameters; and K_i is the number of possible values for the discrete variable x_i. Addressing the aforementioned optimization problem, our PHSS strategy works as follows.

Initialize parameters

Firstly, the PHSS accepts the input parameters and their corresponding values. Then, the PHSS generates the interaction list IL containing all interaction tuple combinations for each pair, which later forms the objective function. Apart from accepting input parameters and their

values, PHSS also needs to initialize the values for HMS, R_HMCR, P_PAR, and the number of improvisations.

Construct harmony memory

The HM can be viewed as the matrix shown in Equation 7. Initially, the first vector of the matrix is filled with the first pair generated in the pairwise interaction list. Then, the rest of the harmony solution vectors are randomly generated.

       | x_1^1     x_2^1     ...   x_N^1     |   f(x^1)
       | x_1^2     x_2^2     ...   x_N^2     |   f(x^2)
HM =   |   :         :               :       |     :                             (7)
       | x_1^HMS   x_2^HMS   ...   x_N^HMS   |   f(x^HMS)

In Equation 7, (x^1, x^2, ..., x^HMS) and f(x^1), f(x^2), ..., f(x^HMS) show each harmony solution vector for the system parameters and the corresponding objective function value (weight), respectively.

Improvise new harmony

A new harmony solution vector x' = (x'_1, x'_2, ..., x'_N) is generated by the following three rules: HM consideration, pitch adjustment or totally random generation. For instance, the value of the first decision variable (x'_1) for the new harmony solution vector can be chosen from the values stored in the HM (x_1^1 to x_1^HMS). The values of the other variables (x'_2, ..., x'_N) can be chosen in the same manner. In the pitch adjustment process (Equations 2 and 3), a new harmony value is developed by shifting to a neighboring value within the range of possible values. For example, if the range of values is {0, 1, 2, 3, 4, 5} and x'_i in the new harmony vector has the value {3}, then this value can be moved to the neighboring value {4}. The value of x'_i keeps its current value when P_random >= P_PAR; otherwise, x'_i is shifted to a neighboring value. By default, the new value will be the value next to the original value; if the original value is the last value for the parameter, the new value will instead be the previous one. The process of updating continues for each variable of the new solution vector until reaching x'_N. At the end of this process, a new harmony solution vector x' = (x'_1, ..., x'_N) is produced.
Depending on the value of R_HMCR (which varies between 0 and 1), it is possible to choose between existing vector values in the HM and completely new random values (Equations 1 and 4). When R_random <= R_HMCR, x'_i is taken from the historical values (the best harmony solution vector variables) stored in the HM; otherwise, x'_i takes a random value from the entire possible range of values (the randomization process). For example, R_HMCR = 0.7 indicates that the PHSS will choose a value from the historically stored values in the HM with a 70% probability; otherwise, the value is selected from the entire possible range of values with a 30% probability. Therefore, it is not advisable to set the value of R_HMCR to 1.0, as there would then be no possibility of the solution being improved by values not stored in the HM. For improving solutions and evading local optima, another option is also introduced: the pitch adjustment, which mimics the tuning of each instrument in the ensemble.

Update harmony memory

If the new harmony solution vector is better than the worst harmony stored in the HM in terms of its weight (covering the maximum number of tuples in the interaction list), that solution vector is added to the HM whilst the worst harmony is removed. Otherwise, no update is made to the HM. The process of updating the HM continues for each iteration until the given number of improvisations is reached.

Stopping criteria

The best harmony stored in the HM is added to the final test suite. Then, the same process continues until all interaction tuples have been covered (and the final test suite is complete). Figure 2 shows the pseudo code of our PHSS strategy.

TUNING OF PHSS

In order to optimize the performance of PHSS, there is a need to tune all the HS parameters, including the HMS, R_HMCR, P_PAR and the number of improvisations. Here, we have adopted a system configuration consisting of 7 5-valued parameters.
The reason for adopting this configuration stems from the fact that the same configuration has been adopted by other researchers for tuning purposes, as in Stardom (2001) and Ahmed et al. (2012). In this case, we vary each of the PHSS parameter values of concern over ten runs. Tables 1 and 2 report the smallest test suite size and the average size over ten runs for each value of the PHSS parameters of concern.

Begin
  Define IL = interaction list for all pairs
  Define FL = final test list (initially empty)
  Define HM = harmony memory list of size HMS
  1st loop: until IL is empty
    2nd loop: until HM is filled with HMS harmonies
      Randomly generate test T
      If T covers the maximum number of pairs, then add T to FL and go to the next 1st loop iteration
      Else, if weight(T) is better than the best weight so far, keep T; generate the next harmony
    ***** Harmony search update *****
    Define harmony memory accepting rate (R_HMCR)
    While (j < max number of iterations)
      While (i <= number of variables)
        If (R_random < R_HMCR)
          Choose a value from HM for the variable i
          If (P_random < P_PAR)
            Adjust the value by moving to the next or previous value
          Else
            Do not adjust the value chosen from HM
        Else
          Choose a random value
      End while
      Accept and add the new harmony (solution) into HM if better than the worst harmony, and exclude the worst harmony
    End while
    Select the best harmony from HM and add it to FL
  End 1st loop
  Print FL
End

Figure 2. The PHSS pseudo code.

Figure 3. Test suite sizes (a) and their averages (b) for each HMS size (1, 5, 10, 20, 50, 100 and 1000) across improvisations of 1 to 10000. [Plots omitted.]
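The flow of Figure 2 can be condensed into a runnable sketch (ours, heavily simplified: one harmony search run per added test case, with the weight of a test counted as the number of still-uncovered pairs it covers; all names are our own):

```python
import random
from itertools import combinations, product

def pairwise_phss_sketch(domains, hms=10, hmcr=0.7, par=0.2, iterations=200):
    # IL: every uncovered 2-way interaction tuple ((i, vi), (j, vj))
    il = {((i, vi), (j, vj))
          for i, j in combinations(range(len(domains)), 2)
          for vi, vj in product(domains[i], domains[j])}

    def weight(test):
        # number of still-uncovered pairs this test would cover
        return sum(((i, test[i]), (j, test[j])) in il
                   for i, j in combinations(range(len(test)), 2))

    fl = []  # final test list
    while il:
        # construct HM with random harmonies
        hm = [[random.choice(d) for d in domains] for _ in range(hms)]
        for _ in range(iterations):
            # improvise: memory consideration, neighbor-shift adjustment,
            # or randomization, one variable at a time
            new = []
            for i, d in enumerate(domains):
                if random.random() < hmcr:
                    v = random.choice(hm)[i]
                    if random.random() < par:
                        k = d.index(v)
                        v = d[k + 1] if k + 1 < len(d) else d[k - 1]
                else:
                    v = random.choice(d)
                new.append(v)
            # replace the worst harmony if the new one is better
            worst = min(range(hms), key=lambda k: weight(hm[k]))
            if weight(new) > weight(hm[worst]):
                hm[worst] = new
        best = max(hm, key=weight)
        if weight(best) == 0:
            continue  # rare: this round covered nothing, resample and retry
        fl.append(best)
        il -= {((i, best[i]), (j, best[j]))
               for i, j in combinations(range(len(best)), 2)}
    return fl

suite = pairwise_phss_sketch([[0, 1, 2]] * 4)
print(len(suite))
```

For 4 parameters with 3 values each, exhaustive testing needs 81 cases, while this greedy cover typically finishes with around a dozen.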

Figure 4. Test suite sizes (a) and their averages (b) for each combination of R_HMCR and P_PAR. [Plots omitted.]

To maximize performance, we first set the values of R_HMCR and P_PAR to 0.95 and 0.1, respectively, as published (Kim et al., 2001; Geem et al., 2005a; Geem et al., 2005b; Geem and Hwangbo, 2006; Geem et al., 2006; Kim et al., 2006; Ayvaz, 2007; Geem, 2007a; Geem, 2007b; Geem and Choi, 2007; Ryu et al., 2007; Forsati et al., 2008; Mahdavi et al., 2008). Then, we vary the HMS size (1, 5, 10, 20, 50, 100 and 1000) and the number of improvisations (1, 5, 10, 50, 100, 500, 1000 and 10000). The results are shown in Table 1. Next, we vary R_HMCR (with the values 0.05, 0.1, 0.2, 0.3, 0.5, 0.6, 0.7 and 0.99) and P_PAR (with the values 0.1, 0.2, 0.3, 0.5, 0.7, 0.9 and 0.99), whilst fixing the HMS size at 100 and the number of improvisations at 1000. It should be noted that at an HMS size of 100 and 1000 improvisations, PHSS produces the most optimal average test size. The results are shown in Table 2. Based on the results shown in Table 1, we plot the test suite sizes and their averages against each HMS size for all improvisations in Figure 3. Based on the results shown in Table 2, we also plot the test suite size against R_HMCR with varying P_PAR (Figure 4). As highlighted earlier (and shown in Figure 3), the best value for the HMS size is 100 and the best value for the number of improvisations is 1000. From Figure 4, we can see the effect of varying the R_HMCR values with respect to the P_PAR values. The values of R_HMCR and P_PAR that yield the most optimal results are 0.7 and 0.2, respectively.
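The tuning procedure above amounts to a grid sweep recording the smallest and average suite size per parameter combination over ten runs. A generic sketch (the harness and the toy stand-in strategy are ours, purely for illustration):

```python
import itertools
import random
import statistics

def tune(strategy, param_grid, runs=10):
    """For each combination in the grid, record (smallest, average)
    suite size over `runs` independent runs of `strategy`."""
    results = {}
    for combo in itertools.product(*param_grid.values()):
        settings = dict(zip(param_grid.keys(), combo))
        sizes = [strategy(settings) for _ in range(runs)]
        results[combo] = (min(sizes), statistics.mean(sizes))
    return results

# toy stand-in for a real generator: suite size shrinks as hmcr grows
def fake_strategy(settings):
    return 40 - int(4 * settings["hmcr"]) + random.randint(0, 2)

grid = {"hmcr": [0.05, 0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.99],
        "par": [0.1, 0.2]}
res = tune(fake_strategy, grid, runs=10)
best = min(res, key=lambda combo: res[combo][1])
print(best, res[best])
```

Selecting the combination with the smallest average (rather than the single smallest run) mirrors the choice of (0.7, 0.2) made above.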
For these reasons, in the PHSS strategy, we have adopted the aforementioned values (HMS = 100, improvisations = 1000, R_HMCR = 0.7, P_PAR = 0.2).

BENCHMARKING RESULTS

To benchmark the PHSS strategy against existing strategies, we have adopted the comparative experiments reported in Shiba et al. (2004), Klaib et al. (2008), Younis et al. (2008) and Ahmed and Zamli (2011). Here, we divide our comparison into two parts. In the first part, we take a system configuration with 10 V-valued parameters, where V is varied from 3 to 10, and a system configuration with P 2-valued parameters, where P is varied from 3 to 15. Our aim here is to investigate how PHSS behaves with respect to varying V and P. In the second part, we group a number of system configurations into eleven groups in order to compare the performance of PHSS against other strategies. The configurations are shown as follows:

S1: 3 3-valued parameters.

S2: 4 3-valued parameters.
S3: 13 3-valued parameters.
S4: 10 10-valued parameters.
S5: 10 15-valued parameters.
S6: 20 10-valued parameters.
S7: 10 5-valued parameters.
S8: 1 5-valued parameter, 8 3-valued parameters, 2 2-valued parameters.
S9: 1 6-valued parameter, 1 5-valued parameter, 6 4-valued parameters, 8 3-valued parameters, 3 2-valued parameters.
S10: 1 7-valued parameter, 1 6-valued parameter, 1 5-valued parameter, 6 4-valued parameters, 8 3-valued parameters, 3 2-valued parameters.
S11: 1 10-valued parameter, 1 9-valued parameter, 1 8-valued parameter, 1 7-valued parameter, 1 6-valued parameter, 1 5-valued parameter, 1 4-valued parameter, 1 3-valued parameter, 1 2-valued parameter.

Table 1. Test suite sizes and their averages for 7 5-valued parameters, with R_HMCR = 0.95 and P_PAR = 0.1.

                                         Improvisation/Iteration
        1          5          10         50         100        500        1000       10000
HMS     Size Avg   Size Avg   Size Avg   Size Avg   Size Avg   Size Avg   Size Avg   Size Avg
1       69   71.6  61   63.6  56   59.0  52   52.2  48   50.6  44   45.6  44   45.8  43   44.2
5       50   53.4  51   51.8  49   49.6  44   46.2  43   45.6  42   43.4  40   41.8  41   42.6
10      49   50.4  49   49.8  48   49.0  43   44.6  41   42.4  39   42.4  42   42.6  40   41.4
20      46   47.2  46   47.2  46   46.2  42   43.2  41   42.0  40   41.6  39   40.8  39   40.0
50      44   44.8  43   43.8  43   43.6  39   40.0  39   39.6  38   38.4  37   38.2  37   38.2
100     43   43.2  42   43.0  42   42.4  39   40.4  39   39.8  37   38.0  37   37.8  37   38.2
1000    39   39.8  39   39.6  39   39.8  39   39.4  39   39.4  38   39.2  37   38.4  37   38.4

Table 2. Test suite size and its average for 7 5-valued parameters with HMS = 100 and 1000 improvisations.

                                                R_HMCR
        0.05       0.1        0.2        0.3        0.5        0.6        0.7        0.99
P_PAR   Size Avg   Size Avg   Size Avg   Size Avg   Size Avg   Size Avg   Size Avg   Size Avg
0.1     39   40.4  40   40.6  40   40.2  37   38.2  37   38.0  37   38.0  36   37.6  38   38.4
0.2     39   40.4  39   40.0  39   39.2  38   39.0  37   37.4  37   37.2  36   37.2  38   40.0
0.3     39   40.2  39   40.2  38   39.4  37   38.8  38   38.4  38   38.2  37   38.2  39   40.8
0.5     39   40.0  39   40.4  39   40.8  40   41.0  39   40.4  40   40.2  39   40.4  42   42.8
0.7     40   40.4  40   40.8  40   40.8  40   42.0  40   41.0  40   41.0  40   40.8  40   40.8
0.9     39   40.2  39   40.0  39   40.8  40   41.2  40   41.2  41   41.4  40   41.6  41   41.4
0.99    40   40.8  40   40.4  40   40.6  41   41.2  41   41.2  41   42.0  41   41.4  41   42.4

Cells with an asterisk (*) in Tables 3 to 5 show the smallest generated test suite size by each strategy. Entries marked not available (NA) denote that the strategy's results are not reported in its publications. Based on the results shown in Tables 3 and 4, it is clear that PHSS performance is not adversely affected by the increasing number of V and P. In fact, it can be seen that PHSS outperforms all other strategies in most of the cases considered. Specifically, in Table 3, in all cases except when V = 8, PHSS

Table 3. Test suite size for a configuration with 10 V-valued parameters.

V      TVG    PICT   CTE XL   TConfig
3      18     18     18       17*
4      33     31     33       31
5      50     47     50       48
6      72     66     71       64
7      98     88     97       85
8      124    112    125      114
9      152    139    161      139
10     189    170    192      170
