IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 3, No. 2, May 2011
ISSN (Online): 1694-0814
www.IJCSI.org

Swarm Intelligence based Soft Computing Techniques for the Solutions to Multiobjective Optimization Problems

Hifza Afaq1 and Sanjay Saini2

Department of Physics & Computer Science, Dayalbagh Educational Institute, Agra 282005, India

Abstract
Multi objective optimization problems can be found in various fields such as finance, automobile design, aircraft design and path optimization. This paper reviews some of the existing literature on multi objective optimization problems and some of the existing Swarm Intelligence (SI) based techniques for solving them. The objective of this paper is to provide the reader with a starting point on multi objective optimization problems.
Keywords: Multiobjective optimization problem, Evolutionary Algorithm, Particle Swarm Optimization, Ant Colony Optimization.

1. Introduction

Most real life problems are multi objective in nature. Their objectives are, in general, conflicting, preventing simultaneous optimization: one objective is optimized at the cost of another. Multi objective optimization problems are difficult but realistic and, because of their broad applicability, have been studied by researchers with various backgrounds. This has given rise to a variety of strategies for solving such problems, and a vast literature exists on methods for dealing with them. It should be noted that every approach has its pros and cons, and in the general case no single best option is available to the solution seeker.
There are two general approaches to multiple objective optimization problems: (i) the classical approach (preference-based multi-objective optimization) and (ii) the ideal approach (the Pareto optimal approach).

A simple method to solve a multi objective optimization problem is to form a composite objective function as the weighted sum of the objectives, where the weight of an objective is proportional to the preference assigned to that particular objective. This method of scalarizing an objective vector into a single composite objective function converts the multi objective optimization problem into a single objective optimization problem. When such a composite objective function is optimized, it is in most cases possible to obtain one particular trade-off solution. This procedure of handling multi objective optimization problems is simple but relatively subjective; it is preference-based multi objective optimization [19].

The second approach is to determine an entire set of solutions that are non-dominated with respect to each other. This set is known as the Pareto optimal set. While moving from one Pareto solution to another, there is always a certain amount of sacrifice in one or more objectives to achieve a certain amount of gain in the other(s). Pareto optimal solution sets are often preferred to single solutions because they can be practical when considering real-life problems. The size of the Pareto set usually increases with the number of objectives [55].

It is important to note that the result obtained by the preference-based strategy largely depends on the relative preference vector used in forming the composite function. A change in this preference vector will result in a (hopefully) different trade-off solution. A preference vector need not result in a trade-off optimal solution for every problem. Moreover, finding a relative preference vector is highly subjective and not straightforward.
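The weighted-sum scalarization described above can be sketched as follows; the two objective functions, the weight choices and the crude grid search are illustrative assumptions, not taken from the paper. Each preference vector turns the multi objective problem into a single objective one and yields a single trade-off solution:

```python
# Two illustrative (hypothetical) conflicting objectives of one variable:
# f1 favours small x, f2 favours x near 2, so they cannot both be minimal.
def f1(x):
    return x ** 2

def f2(x):
    return (x - 2) ** 2

def weighted_sum(x, w1, w2):
    """Scalarize the objective vector with preference weights w1, w2."""
    return w1 * f1(x) + w2 * f2(x)

def minimize_scalarized(w1, w2, lo=-1.0, hi=3.0, steps=4001):
    """Crude grid search over [lo, hi]; one weight vector -> one solution."""
    xs = (lo + i * (hi - lo) / (steps - 1) for i in range(steps))
    return min(xs, key=lambda x: weighted_sum(x, w1, w2))

# Different preference vectors pick different points on the trade-off curve:
print(minimize_scalarized(0.5, 0.5))  # compromise, near x = 1
print(minimize_scalarized(0.9, 0.1))  # biased toward f1, near x = 0.2
```

Changing the weights moves the solution along the trade-off curve, which is exactly the subjectivity the text attributes to the preference-based strategy.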
On the other hand, the ideal multi objective optimization procedure is less subjective. The main task in this approach is to find as many different trade-off solutions as possible. Once a well distributed set of trade-off solutions is found, higher level problem information is used to choose one solution from the obtained set. Thus, the fundamental difference between the two approaches in their use of problem information is that, in the classical approach, a relative preference vector must be supplied without any knowledge of the possible consequences, whereas in the ideal approach the problem information is used to choose one solution from the obtained set of trade-off solutions. In this respect the ideal approach is more practical and less subjective [19].

2. Multi Objective Optimization Problems

A multi-objective optimization problem requires paying attention to a number of conflicting objectives simultaneously. The goal in multi-objective optimization is to obtain a finite number of Pareto optimal solutions instead of a single optimal solution. For example, a multi objective problem that simultaneously minimizes (without loss of generality, since a maximization problem can be changed to a minimization problem by the duality principle) M objectives can be described as follows:

Minimize (f1(x), f2(x), ..., fM(x)), subject to x ∈ S,

where S denotes the feasible solution space. In order to solve a multi objective problem, some important concerns must be handled carefully. One of these concerns is how to optimize all the objective functions at the same time. An important concept tied to multi objective optimization is domination. Deb, K. [19] defines domination by Definition 1.

Definition 1: A solution x(1) is said to dominate another solution x(2) if both of the following conditions are true:
I. The solution x(1) is no worse than x(2) in all objectives, i.e., fj(x(1)) ≤ fj(x(2)) for all j = 1, 2, ..., M.
II. The solution x(1) is strictly better than x(2) in at least one objective, i.e., fj(x(1)) < fj(x(2)) for at least one j ∈ {1, 2, ..., M} [19].

A non-dominated set of solutions is defined by Definition 2.

Definition 2: Among a set of solutions P, the non-dominated set of solutions P' consists of those that are not dominated by any member of the set P [19].

The set of non-dominated solutions is also known as the Pareto optimal set of solutions. This set contains all the candidate good solutions for the solution seeker.

Multi objective optimization problems have been tackled through various approaches, of which evolutionary computation, PSO and ACO are reviewed in the current text. PSO has been known to perform well for many static problems. However, many real-world problems are dynamic, and it has been argued that EAs are potentially well suited to dynamic problems [73], [77], [83]. PSO techniques, though, also hold promise for dynamic problems [5], [12]. Another approach that is gaining popularity for multi objective optimization is the Ant Colony Optimization technique [7], [21], [35]. ACO has been applied, and is known to work well, in various fields: classical problems such as assignment problems, sequencing [35] and scheduling problems [7], graph coloring [39], discrete and continuous function optimization, and path optimization problems [3], [23], [24], [31], [32], [33], [36], [72]. More recent applications include cell placement problems arising in circuit design, the design of communication networks [11], [22], and problems in bioinformatics [62].

3. Solution to Multiobjective Optimization Problems

3.1 Evolutionary Algorithm

Evolutionary algorithms (EAs) are often well suited for optimization problems involving several, often conflicting, objectives, i.e., multi objective optimization problems. EAs have been used in various fields such as autopilot design [8] and the vehicle routing and travelling salesman problems [30]. Brockhoff reviewed theoretical aspects of evolutionary multiobjective optimization in [9]. Potvin [71] reviewed the literature on evolutionary algorithms for the vehicle routing problem, which is a multi objective optimization problem. Abraham and Jain state that real world applications generally involve several conflicting objectives.
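The dominance relation of Definition 1 and the non-dominated set of Definition 2 can be sketched directly; the sample objective vectors below are hypothetical, and both objectives are taken to be minimized:

```python
def dominates(f_a, f_b):
    """Definition 1: objective vector f_a dominates f_b (minimization)
    if it is no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f_a, f_b))
    strictly_better = any(a < b for a, b in zip(f_a, f_b))
    return no_worse and strictly_better

def non_dominated(points):
    """Definition 2: members of `points` not dominated by any other member."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical objective vectors (f1, f2):
front = non_dominated([(1, 5), (2, 3), (4, 1), (3, 3), (5, 5)])
print(front)  # (3, 3) and (5, 5) are dominated; the rest form the Pareto set
```

Note the asymmetry of the two conditions: (2, 3) and (4, 1) do not dominate each other, which is exactly why a set of mutually non-dominated trade-off solutions arises.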
They defined the fundamental concepts of multiobjective optimization, emphasizing the motivation for, and advantages of, using evolutionary algorithms, in [1].

[Fig. 1: Flow chart for an EA. Start → Initialize population → Evaluate fitness → Solution found? If no, apply genetic operators and re-evaluate; if yes, stop.]
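The loop of Fig. 1 can be sketched as a minimal generational EA. The OneMax fitness, truncation selection and all parameter values here are illustrative assumptions, not anything prescribed by the paper:

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=60,
           p_cross=0.9, p_mut=0.05, seed=1):
    """Minimal generational EA following the Fig. 1 loop:
    initialize -> evaluate -> apply genetic operators -> repeat."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)  # evaluate fitness
        parents = scored[:pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            if rng.random() < p_cross:                   # one-point crossover
                cut = rng.randrange(1, n_bits)
                a = a[:cut] + b[cut:]
            # bit-flip mutation with probability p_mut per bit
            children.append([bit ^ (rng.random() < p_mut) for bit in a])
        pop = children
    return max(pop, key=fitness)

# Toy OneMax problem (hypothetical): maximize the number of 1-bits.
best = evolve(fitness=sum)
print(sum(best))  # typically at or near n_bits for this easy problem
```

A single-objective loop like this is the skeleton that MOEAs such as NSGA-II extend, mainly by replacing the selection step with dominance-based ranking.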

Fieldsend [29] characterized multi objective problems as typically complex, with both a large number of parameters to be adjusted and several objectives to be optimized. Multi objective evolutionary algorithms (MOEAs) are a popular approach to confronting such problems using evolutionary search techniques. It is such problems that make EAs a tool of preference: EAs, which maintain a population of solutions, are able to explore several parts of the Pareto front simultaneously. The flow chart in Fig. 1 displays basic EA functionality.

Srinivas, N. and Deb, K. applied a genetic algorithm in [77] to find the Pareto optimal set. Deb, K. describes in [18] the problem features that may cause a multi-objective genetic algorithm difficulty in converging to the true Pareto optimal front. He constructed multiobjective test problems having features specific to multiobjective optimization, enabling researchers to test their algorithms for specific aspects of multi-objective optimization. Deb, K. and Agrawal, S. [20] investigated the performance of simple tripartite GAs on a number of simple to complex test problems from a practical standpoint. For solving simple problems the mutation operator plays an important role, although the crossover operator can also be applied; when these two operators are applied alone, they have two different working zones for the population size. For complex problems involving massive multimodality and deception, the crossover operator is the key search operator and performs reliably with an adequate population size. Based on this study, they recommended using the crossover operator with an adequate population size.

Corne and Knowles [17] introduced the Pareto Envelope-based Selection Algorithm (PESA). Zitzler, Deb and Thiele [84] compared four multi objective EAs quantitatively.
Also, they introduced an evolutionary approach to multi-criteria optimization, the Strength Pareto EA (SPEA), which combines several features of previous multi objective EAs in a unique manner to produce better results. Binh and Korn [4] presented a multiobjective evolution strategy (MOBES) for solving multiobjective optimization problems subject to linear and non-linear constraints. The algorithm they proposed maintains a set of feasible Pareto optimal solutions in every generation.

Some researchers have modified the genetic algorithm to get better results on the problems to be tested. In this vein, Jian [48] employed a proposed genetic particle swarm optimization method (MGPSO) to solve capacitated vehicle routing problems, incorporating into PSO the genetic reproduction mechanisms of crossover and mutation. The algorithm employs an integer encoding and decoding representation and has been applied to five well-known CVRP benchmarks. Prins developed a hybrid GA that is relatively simple but effective for solving the vehicle routing problem.

Jin, Okabe and Sendhoff [49] introduced the idea of systematically changing the weights during evolution so as to lead the population to the Pareto front. They investigated two methods: one assigns a uniformly distributed random weight to each individual in the population in each generation; the other changes the weight periodically as the evolution proceeds. They found in both cases that the population is able to approach the Pareto front, although it does not keep all the Pareto solutions found in the population. Zhang and Rockett [82] compared three evolutionary techniques: (i) the Strength Pareto Evolutionary Algorithm (SPEA2), (ii) the Non-dominated Sorting Genetic Algorithm (NSGA-II) and (iii) the Pareto Converging Genetic Algorithm (PCGA).

3.2 Particle Swarm Techniques
a. Particle Swarm Optimization Algorithm

Heppner and Grenander [38] proposed in 1990 that small birds fly in coordination and display strong synchronization in turning, initiation of flight, and landing, without any clear leader. They proposed that this synchronization of movement may be a by-product of "rules" for movement followed by each bird in the flock. Simulation of a bird swarm was used to develop the particle swarm optimization (PSO) concept [51]. PSO was originally developed through simulation of bird flocking in two-dimensional space. The search procedure of PSO can be described as follows:

i. Each particle evaluates the function to be maximized at each point it visits in the space.
ii. Each particle remembers the best value it has found so far (pbest) and its coordinates.
iii. Each particle knows the globally best position that any member of the flock has found, and its value (gbest).
iv. Using the coordinates of pbest and gbest, each particle calculates its new velocity as in Eq. (1), and its position is updated by Eq. (2):

vi(t+1) = vi(t) + φ1 (pi − xi(t)) + φ2 (pg − xi(t))   (1)

xi(t+1) = xi(t) + vi(t+1)   (2)

Here, φ1 = c1 R1 and φ2 = c2 R2, with 0 ≤ R1 ≤ 1 and 0 ≤ R2 ≤ 1.
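The update rules of Eqs. (1) and (2) can be sketched for a single particle as below; the coefficient values c1 = c2 = 2.0 are a common but assumed choice, not taken from the paper:

```python
import random

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0, rng=random):
    """One particle update per Eqs. (1) and (2):
    v(t+1) = v(t) + phi1*(p_i - x(t)) + phi2*(p_g - x(t)),
    x(t+1) = x(t) + v(t+1), with phi1 = c1*R1 and phi2 = c2*R2."""
    new_x, new_v = [], []
    for xd, vd, pd, gd in zip(x, v, pbest, gbest):
        phi1 = c1 * rng.random()   # R1 drawn uniformly from [0, 1]
        phi2 = c2 * rng.random()   # R2 drawn uniformly from [0, 1]
        vd = vd + phi1 * (pd - xd) + phi2 * (gd - xd)
        new_v.append(vd)
        new_x.append(xd + vd)      # Eq. (2): move with the new velocity
    return new_x, new_v

x, v = pso_step(x=[0.0, 0.0], v=[0.1, -0.1],
                pbest=[1.0, 1.0], gbest=[2.0, 2.0])
```

Both attraction terms pull the particle toward its own best and the global best; the random factors R1 and R2 keep the pull stochastic per dimension.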

The basic particle swarm optimization algorithm is as follows:

i. Randomly generate an initial swarm.
ii. Repeat: for each particle i do
   a. if f(xi) > f(pi) then pi = xi
   b. pg = max(pneighbours)
   c. update velocity
   d. update position
   e. end for
iii. until the termination criterion is met.

Kennedy and Eberhart, in later work [50], introduced the discrete binary version of the PSO algorithm. In this algorithm the velocity of a particle is described by the Hamming distance between the particle at time t and at time t−1. They defined vid for discrete space as the probability of bit xid taking the value 1. Eq. (1) thus remains unaltered in the discrete version, except that pid and xid now belong to {0, 1} and are discrete, and vid, since it is a probability, is constrained to the interval [0.0, 1.0]. The resulting change in position is then defined by the following rule:

if (rand() < S(vid)) then xid = 1, else xid = 0,

where the function S(v) is a sigmoid limiting transformation and rand() is a quasi-random number drawn from a uniform distribution on [0.0, 1.0]. From the results obtained, they [50] concluded that the binary particle swarm implementation is capable of solving various problems very rapidly. They also found that the discrete version of the algorithm extends the capabilities of the continuous-valued one and is able to optimize any function, continuous or discrete. Higashi and Iba [40] presented a particle swarm optimization with Gaussian mutation that combines the idea of the particle swarm with concepts from evolutionary algorithms. They tested this model against the standard PSO and a standard GA; experiments on unimodal and multimodal functions gave better results with PSO with Gaussian mutation than with the standard GA and PSO.
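The binary PSO position rule above can be sketched as follows; the function and variable names are illustrative, not from the paper:

```python
import math
import random

def sigmoid(v):
    """Limiting transformation S(v) used by the discrete binary PSO."""
    return 1.0 / (1.0 + math.exp(-v))

def update_bit(v_id, rng=random):
    """Kennedy and Eberhart's rule: the (squashed) velocity v_id acts as
    the probability that bit x_id takes the value 1."""
    return 1 if rng.random() < sigmoid(v_id) else 0

# A strongly positive velocity makes a 1-bit very likely, and vice versa.
bits = [update_bit(4.0) for _ in range(10)]   # mostly ones
```

The sigmoid squashes any real-valued velocity into a probability, which is what lets the continuous velocity update of Eq. (1) drive a purely discrete position.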
Khanesar, Teshnehlab and Shoorehdeli [53] introduced a novel binary particle swarm optimization.

b. PSO Parameters

Many researchers have focused their attention on the different parameters used in PSO in order to produce better results. Pedersen, M. E. H. explains in Good Parameters for Particle Swarm Optimization [69] that particle swarm optimization has a number of parameters that determine its behavior and efficacy in optimizing a given problem. Shi and Eberhart [75], in the late 1990s, pointed out that for different problems there should be different balances between the local search ability (pbest) and the global search ability (gbest). Considering this, they introduced a parameter called the inertia weight (w) into Eq. (1), giving Eq. (3), while Eq. (2) remained unchanged:

vi(t+1) = w vi(t) + φ1 (pi − xi(t)) + φ2 (pg − xi(t))   (3)

The inertia weight introduced by [75] can be a positive constant or even a positive linear or nonlinear function of time. They performed various experiments on Schaffer's f6 benchmark function and concluded that PSO with an inertia weight in the range [0.9, 1.2] will, on average, have better performance. They also introduced the concept of a time-decreasing inertia weight to improve PSO performance, as well as the idea of building a fuzzy system to tune the inertia weight on line. Clerc and Kennedy [13] pointed out that the traditional versions of the algorithm have some undesirable dynamical properties; in particular, the particles' velocities need to be limited in order to control their trajectories. In their study they [13] also analyzed a particle's trajectory as it moves in discrete time, then progressed to the view of it in continuous time. On the basis of this analysis they presented a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies.
They [13] introduced the constriction factor χ, which causes convergence of the individual trajectory in the search space and whose value is typically approximately 0.729:

vi(t+1) = χ [vi(t) + U(0, φ1) (pi − xi(t)) + U(0, φ2) (pg − xi(t))]   (4)

Here, φ = φ1 + φ2. Clerc's analysis was worked out using a condensed form of the formula, given by Eq. (5):

vi(t+1) = χ [vi(t) + φ (pm − xi(t))]   (5)

In this model pm = (φ1 pi + φ2 pg) / (φ1 + φ2). Mendes, Kennedy and Neves [59] proposed an alternative way of calculating pm, given by Eq. (6):

pm = Σk∈N w(k) φk pk / Σk∈N w(k) φk,  with φk = U(0, φMAX / |N|)   (6)

Here N is the set of neighbors of the particle and pk is the best position found by individual k [59]. In this model all the neighbors contribute to the velocity adjustment; the authors called it the fully informed PSO.

Adriansyah and Amin [2] proposed a variation of the PSO model in which the inertia weight decreases along a sigmoid curve; they called it the Sigmoid Decreasing Inertia Weight. Parsopoulos and Vrahatis [67] conducted various experiments on benchmark problems and drew useful conclusions regarding the effect of the parameters on the algorithm's performance.

c. Neighborhood Topologies

Kennedy and Mendes [52] describe the particle swarm algorithm as a population of vectors whose trajectories oscillate around a region defined by each individual's previous best success and the success of some other particle. Various methods have been used to identify the "some other particle" that influences an individual. In the original PSO algorithm a particle was influenced by its own position and the position of the particle that had found the best value of the function. Kennedy and Mendes used the gbest and pbest topologies in combination to improve the performance of basic PSO and tested the algorithm on six standard benchmark functions, keeping two factors in focus: (i) the number of neighbors and (ii) the amount of clustering. The results suggested that the von Neumann topology (with and without self) did well, whereas the star and gbest (without self) topologies were the worst [52]; they recommended the von Neumann sociometry, which performed better than the standard ones on a suite of standard test problems. Fig. 2 depicts the different topological structures. Mendes, R., Kennedy, J. and Neves, J. [60] introduced new ways in which an individual particle can be influenced by its neighbors; the different topological structures are shown in Fig. 2. The traditional particle swarm topology gbest, as described by Mendes, Kennedy and Neves [59], treats the entire population as the individual's neighborhood: all particles are directly connected to the best solution in the population. In contrast, the neighborhood of an individual in the pbest-type sociometry
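The velocity-update variants reviewed in this section, the inertia weight of Eq. (3), the constriction factor of Eq. (4) and the fully informed update of Eqs. (5) and (6), can be sketched side by side for one dimension of one particle. The parameter values and the equal weighting w(k) = 1 in the fully informed term are illustrative assumptions:

```python
import random

CHI = 0.729  # Clerc's constriction coefficient, for phi = phi1 + phi2 = 4.1

def inertia_velocity(v, x, p_i, p_g, w=0.9, c1=2.0, c2=2.0, rng=random):
    """Eq. (3): the inertia weight w scales the previous velocity."""
    return (w * v + c1 * rng.random() * (p_i - x)
                  + c2 * rng.random() * (p_g - x))

def constriction_velocity(v, x, p_i, p_g, phi1=2.05, phi2=2.05, rng=random):
    """Eq. (4): the whole update is multiplied by the constriction factor."""
    return CHI * (v + rng.uniform(0, phi1) * (p_i - x)
                    + rng.uniform(0, phi2) * (p_g - x))

def fips_velocity(v, x, neighbor_bests, phi_max=4.1, rng=random):
    """Eqs. (5)-(6): every neighbor's best contributes, here with w(k) = 1."""
    phis = [rng.uniform(0, phi_max / len(neighbor_bests))
            for _ in neighbor_bests]
    p_m = sum(f * p for f, p in zip(phis, neighbor_bests)) / sum(phis)
    return CHI * (v + sum(phis) * (p_m - x))

v = fips_velocity(v=0.1, x=0.0, neighbor_bests=[1.0, 2.0, 0.5])
```

With only the particle's own best and the global best as "neighbors", the fully informed form collapses back to the condensed Clerc model of Eq. (5), which is why the three variants are usually presented as one family.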
