An Algorithm For Nonlinear Optimization Problems With Binary Variables


Comput Optim Appl (2010) 47: 257–288
DOI 10.1007/s10589-008-9218-1

An algorithm for nonlinear optimization problems with binary variables

Walter Murray · Kien-Ming Ng

Received: 6 April 2007 / Revised: 22 October 2008 / Published online: 2 December 2008
© Springer Science+Business Media, LLC 2008

Abstract  One of the challenging optimization problems is determining the minimizer of a nonlinear programming problem that has binary variables. A vexing difficulty is the rate at which the work to solve such problems increases as the number of discrete variables increases. Any such problem with bounded discrete variables, especially binary variables, may be transformed to that of finding a global optimum of a problem in continuous variables. However, the transformed problems usually have an astronomically large number of local minimizers, making them harder to solve than typical global optimization problems. Despite this apparent disadvantage, we show that the approach is not futile if we use smoothing techniques. The method we advocate first convexifies the problem and then solves a sequence of subproblems, whose solutions form a trajectory that leads to the solution. To illustrate how well the algorithm performs we show the computational results of applying it to problems taken from the literature and to new test problems with known optimal solutions.

Keywords  Nonlinear integer programming · Discrete variables · Smoothing methods

The research of W. Murray was supported by Office of Naval Research Grant N00014-08-1-0191 and Army Grant W911NF-07-2-0027-1.

W. Murray
Systems Optimization Laboratory, Department of Management Science and Engineering, Stanford University, Stanford, CA 94305-4022, USA
e-mail: walter@stanford.edu

K.-M. Ng
Department of Industrial & Systems Engineering, National University of Singapore, Singapore, Singapore
e-mail: isenkm@nus.edu.sg

1 Introduction

Nonlinear optimization problems whose variables can only take on integer quantities or discrete values abound in the real world, yet there are few successful solution methods, and even one of the simplest cases, with a quadratic objective function and linear constraints, is NP-hard (see, e.g., [6]). We focus our interest on problems with linear equality constraints and binary variables. However, once the proposed algorithm has been described it will be seen that extending it to more general cases, such as problems with linear inequality constraints or problems with some continuous variables, is trivial. Also, the binary-variable requirement is not restrictive, since any problem with bounded discrete variables can easily be transformed to a problem with binary variables.

The problem of interest can thus be stated as

    Minimize   f(x)
    subject to Ax = b,
               x ∈ {0, 1}^n,            (1.1)

where f : R^n → R is a twice continuously differentiable function, A is an m × n matrix with rank m, and b ∈ R^m is a column vector. Assuming differentiability may seem strange when x is discrete. (Often f(x) cannot be evaluated unless every xj = 0 or 1.) Nonetheless, for a broad class of real problems f(x) is smooth, and when it is not it is often possible to alter the original function suitably (see [26]). As noted, problems in the form of (1.1) are generally NP-hard. There is no known algorithm that can be assured of obtaining the optimal solution to large NP-hard problems within a reasonable amount of computational effort and time. It may even be difficult just to obtain a feasible solution to such problems. The challenge is to discover algorithms that can generate a good approximate solution to this class of problems for an effort that increases only slowly as a function of n.

If the requirement that x ∈ {0, 1}^n is dropped, then typically the effort to find a local minimizer of (1.1) increases only slowly with the size of the problem.
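For concreteness, the baseline against which any such algorithm competes is exhaustive search over the binary points. The sketch below (our own illustration; the tiny objective f, constraint data A, b and the instance size are invented for the demo) enumerates a four-variable instance of (1.1), which is only viable because 2^n is small here — precisely the scaling problem the paper addresses:

```python
import itertools

# invented instance of (1.1): n = 4, one equality constraint x1+x2+x3+x4 = 2
def f(x):
    # an invented nonconvex quadratic objective
    return (x[0] - 0.3) ** 2 - x[1] * x[2] + 2.0 * x[3]

A = [[1, 1, 1, 1]]
b = [2]

def feasible(x):
    # check Ax = b exactly (all data here are integers)
    return all(sum(a * xj for a, xj in zip(row, x)) == bi
               for row, bi in zip(A, b))

# exhaustive search over {0,1}^n: cost grows as 2^n, hence only a baseline
candidates = [x for x in itertools.product((0, 1), repeat=4) if feasible(x)]
x_best = min(candidates, key=f)
```

For this instance the enumeration visits the six feasible vertices and returns the global minimizer, but doubling n squares the number of vertices, which is why the paper instead follows a continuous trajectory.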
It would seem attractive if (1.1) could be replaced by a problem in continuous variables. The equivalence of (1.1) and that of finding the global minimizer of a problem with continuous variables is well known (see [14, 34]). One simple way of enforcing x ∈ {0, 1}^n is to add the constraints 0 ≤ x ≤ e and x^T(e − x) ≤ 0, where e is the vector of unit elements. Clearly, for x to be feasible each xi must be 0 or 1. The difficulty is that there is a world of difference between finding a global minimizer as opposed to a local minimizer. If there are only a few local minimizers then the task is relatively easy, but in this case it is possible that every vertex of the feasible region (and there could be 2^n vertices) is a local minimizer. Typically the minimizer found by an algorithm for finding a local minimizer is entirely determined by the starting point chosen. Traditional methods for global optimization (see [25]) perform particularly poorly on problems with numerous local minimizers.

A number of approaches have been developed to handle mixed-integer nonlinear programming problems via a transformation of the discrete problem into a continuous problem. These approaches involve geometric, analytic and algebraic techniques, including the use of global or concave optimization formulations, semidefinite programming and spectral theory (see, e.g., [9, 19, 20, 30, 31]). In particular, [23, 29] give an overview of many of these continuous approaches and interior-point methods. Perhaps the most obvious approach is to simply ignore the discrete requirement, solve the resulting “relaxed” problem and then discretize the solution obtained by using an intelligent rounding scheme (see [16], Chapter 7, and [22]). Such approaches work best when the discrete variables are in some sense artificial. For example, when optimizing a pipe layout it is known that pipes are available only in certain sizes. In principle pipes could be of any size, and the physics of the model is valid for sizes other than those available. However, even in these circumstances there may be difficult issues in rounding, since it may not be easy to retain feasibility. It is not difficult to see that this approach can fail badly. Consider minimizing a strictly concave function subject to the variables being binary. The relaxed problem has a discrete solution, and obviously there is no need to round. Rather than being good news, this illustrates that the solution found is entirely determined by the choice of the initial point used in the continuous optimization algorithm. Given that such methods find local minimizers, the probability of determining the global minimizer is extremely small, since there may be 2^n local minimizers. There are other problems in which little information can be determined from the continuous solution. For example, in determining the optimum location of substations in an electrical grid, relaxing the discrete requirements results in a substation being placed at every node that precisely matches the load. This is the global minimizer of the continuous problem, but it gives almost no information on which node to place a substation.
To be feasible it is necessary to round up to avoid violating voltage constraints. Moreover, for many nodes the loads are identical, so any scheme that rounds up some variables and rounds down others in the hope of the solution still being feasible has trouble identifying a good choice.

There are even fewer methods that are able to solve general problems with a large number of variables. Three commercial packages that are available are DICOPT (see [18]), SBB (see [5]) and BARON (see [32]). DICOPT is a package for solving mixed-integer nonlinear programming problems based on extensions of the outer-approximation algorithm (see [10]) for the equality-relaxation strategy, solving a sequence of nonlinear programming and mixed-integer linear programming problems. SBB is based on a combination of the standard branch-and-bound method for mixed-integer linear programming problems and standard nonlinear programming solvers. BARON is a global optimization package based on the branch-and-reduce method (see [33]). These solver packages can in principle solve large nonlinear mixed-integer programming problems. However, it will be seen that the performance of these algorithms, especially on large problems, is sometimes poor, and there is a clear need for better algorithms for this class of problems. It may be that a single algorithm that can find good solutions for most large problems is a step too far, but it helps to add to the arsenal of solution methods new methods whose behavior and character differ from current methodology.

Throughout this paper, we let ‖·‖ denote the 2-norm, i.e., the Euclidean norm, of a vector, and let e denote the column vector of ones with dimension relevant to the context. If x and y are vectors in R^n, we let x ≤ y and x < y mean respectively that xi ≤ yi and xi < yi for every i ∈ {1, 2, . . . , n}, and Diag(x) denote the diagonal matrix with the diagonal elements made up of the respective elements xi, i ∈ {1, 2, . . . , n}. Also, the null space of a matrix A is defined as {x ∈ R^n : Ax = 0}. For a real-valued function f, we say that f is a C^k function if it is kth-order continuously differentiable.

2 An exact penalty function

Rather than add nonlinear constraints we have chosen to add a penalty term

    γ Σ_{j∈J} xj(1 − xj),            (2.1)

with a penalty parameter γ ≥ 0, where J is the index set of the variables that are judged to require forcing to a bound. We could have added the penalty γ x^T(e − x), but often the objective is nearly concave, and if there are not many equality constraints some of the variables will be forced to 0 or 1 without any penalty term being required. The problem then becomes

    Minimize   F(x) ≡ f(x) + γ Σ_{j∈J} xj(1 − xj)
    subject to Ax = b,
               0 ≤ x ≤ e.            (2.2)

In general, it is possible to show that under suitable assumptions the penalty function introduced this way is “exact”, in the sense that the following two problems have the same global minimizers for a sufficiently large value of the penalty parameter γ:

    Minimize   g(x)
    subject to Ax = b,
               x ∈ {0, 1}^n            (2.3)

and

    Minimize   g(x) + γ x^T(e − x)
    subject to Ax = b,
               0 ≤ x ≤ e.            (2.4)

An example of such a result (see, for example, [30]) is:

Theorem 2.1  Let g : [0, 1]^n → R be a C^1 function and consider the two problems (2.3) and (2.4), with the feasible region of (2.3) being non-empty. Then there exists M ≥ 0 such that for all γ ≥ M, problems (2.3) and (2.4) have the same global minimizers.

Note that although this is an exact penalty function it is twice continuously differentiable, and the extreme ill-conditioning arising out of the need for γ → ∞ normally required for smooth penalty functions is avoided. Indeed, a modestly large value of γ is usually sufficient to indicate whether a variable is converging to 0 or 1.
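The exactness mechanism can be sketched numerically (a minimal illustration, not the paper's implementation; the concave objective g, the value of γ and the grid search are all invented for the demo): the term γ Σ xj(1 − xj) vanishes exactly at binary points and is positive in the interior of the box, so for γ large enough the penalized continuous problem inherits the binary minimizer, as Theorem 2.1 asserts:

```python
import itertools

# invented strictly concave objective on [0,1]^2 whose binary minimizer is (1, 0)
def g(x):
    return -(x[0] - 0.2) ** 2 - (x[1] - 0.8) ** 2

def penalty(x):
    # the exact penalty term of (2.1): zero at binary points, positive inside (0,1)^n
    return sum(xj * (1.0 - xj) for xj in x)

gamma = 10.0

def F(x):
    # penalized objective of (2.4)
    return g(x) + gamma * penalty(x)

# the penalty vanishes exactly at the vertices of the unit box
vertices = list(itertools.product((0.0, 1.0), repeat=2))
assert all(penalty(v) == 0.0 for v in vertices)

# a coarse grid search over [0,1]^2 on the penalized objective lands on the
# same point as minimizing g over the binary vertices
grid = [i / 10.0 for i in range(11)]
best = min(itertools.product(grid, grid), key=F)
```

The grid search here is only a stand-in for a continuous minimization; the point is that with this γ the interior points all carry a large penalty, so the penalized and binary problems agree on the minimizer.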

Since the penalty term is concave, we are highly likely to introduce local minimizers at almost all the feasible integer points and, just as significantly, saddle points at interior points. It is clearly critical that any method used to solve the transformed problem be one that can be assured of not converging to a saddle point.

The idea of using penalty methods for discrete optimization problems is not new (see, e.g., [3]). However, it is not sufficient to introduce only the penalty terms and hope to obtain the global minimizer by solving the resulting problem, because it is highly likely that many undesired stationary points and local minimizers are introduced in the process. This flaw also applies to the process of transforming a discrete optimization problem into a global optimization problem simply by replacing the discrete requirements on the variables with a nonlinear constraint. The danger of using the penalty function alone is illustrated by the following example:

Example  Consider the problem min{x² : x ∈ {0, 1}}. It is clear that the global minimizer is given by x* = 0. Suppose the problem has been transformed to min_{0≤x≤1} {x² + γ x(1 − x)}, where γ ≥ 0 is the penalty parameter. If γ > 2, then the first-order KKT conditions are satisfied at 0, 1 and γ/(2(γ − 1)). Suppose in a descent method the initial point is chosen in the interval [γ/(2(γ − 1)), 1]; then the sequence generated will not converge to 0. Thus for large values of γ the probability of converging to 0 from a random initial point is close to 0.5.
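The interior stationary point in this example follows directly from the first-order condition: F′(x) = 2x + γ(1 − 2x) vanishes at x = γ/(2(γ − 1)), which approaches 1/2 from above as γ grows. A two-line check (our own illustration; the value γ = 10 is arbitrary):

```python
def F_prime(x, gamma):
    # derivative of F(x) = x**2 + gamma * x * (1 - x) on [0, 1]
    return 2.0 * x + gamma * (1.0 - 2.0 * x)

gamma = 10.0
x_bar = gamma / (2.0 * (gamma - 1.0))   # interior KKT point, = 5/9 for gamma = 10

# the closed-form point is indeed stationary, and it sits just above 1/2,
# so roughly half of [0, 1] lies in the basin of the spurious minimizer x = 1
residual = F_prime(x_bar, gamma)
```

This makes the probability claim in the example concrete: the basin boundary sits near the middle of the interval, so a uniformly random start converges to the wrong vertex about half the time.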
A probability of 0.5 may not seem so bad, but if we now consider the n-dimensional problem min{‖x‖² : x ∈ {0, 1}^n}, then every vertex is a local minimizer of the continuous problem and the probability of converging from a random initial point to the correct local minimizer is not much better than 1/2^n.

In the following we assume that the feasible space is bounded, which is the case for the problems of interest.

Definition  Let x̄ be a minimizer of (2.2) and S(x̄) denote the set of feasible points for which there exists an arc emanating from x̄ such that the tangent to the arc at x is −ZZ^T∇f(x), where the columns of Z are a basis for the null space of the constraints active at x. Let the volume of S(x̄) be denoted by Sv(x̄). We may rank the minimizers in terms of the size of Sv(x̄).

If x* ∈ R^n is the global minimizer and V is the volume of the feasible space, then we can expect that

    lim_{n→∞} Sv(x*)/V = 0.

It is this fact that makes it unlikely that simply applying a local search method to (2.2) will be successful. Indeed there is little hope that even a good local minimizer would be found. If the minima are uniformly distributed with mean M, then the expected value of f(x̄) for a random starting point is M. There is little reason to suppose M is close to f(x*).

There are alternative penalty functions. Moreover, xi(1 − xi) = 0 is a complementarity condition, and there is a vast literature on how to impose such conditions. In addition to penalty functions there are other means of trying to enforce variables to take their discrete values. A particularly useful approach is to replace terms such as c^T x that often appear in the objective, where ci ≥ 0 (xi = 1 implies a capital cost), by the expression Σ_{i=1}^n ci(1 − e^{−βxi}), with β = 10 typically. This term favors setting variables to 0 and 1 rather than, say, 0.5 and 0.5. It may be thought that it would always prefer all variables to be 0, but that is typically prevented by constraints. Also, rounding up ensures feasibility but rounding down does not.

3 Smoothing methods

In general, the presence of multiple local minima in an optimization problem is common and often makes the search for the global minimum very difficult. The fewer the local minima, the more likely it is that an algorithm that finds a local minimizer will find the global minimum. Smoothing methods refer to a class of methods that replace the original problem by either a single problem or a sequence of problems whose objective is “smoother”. In this context “smoother” may be interpreted as a function whose second- or higher-order derivatives are smaller in magnitude, with a straight line being the ultimate smooth function.

The concept of smoothing has already been exploited in nonconvex optimization and discussed in the context of global optimization (see, e.g., [19]). There are a number of ways to smooth a function, and which is best depends to a degree on what characteristics of the original function we are trying to suppress. Smoothing methods can be categorized into two types: local or global smoothing. Local smoothing algorithms are particularly suited to problems in which noise arises during the evaluation of a function. Unfortunately, noise may produce many local minimizers, or it may produce a minimizer along the search direction in a linesearch method and result in a tiny step. Local smoothing has also been suggested to eliminate poor minimizers that are part of the true problem.
Under such circumstances the introduction of smoothing alters the problem and may change the required global minimizer. To overcome this, a sequence of problems is solved in which the degree of smoothing is reduced to zero. While local smoothing may be useful in eliminating tiny local minimizers even when there are large numbers of them, it is less useful at removing significant but poor minimizers (see [25]).

We would like to transform the original problem with many local minima into one that has fewer local minima, or even just one local minimum, so obtaining a global optimization problem that is easier to solve (see Fig. 1). Obviously we wish to eliminate poor local minima. In the subsequent discussion, we consider modified objective functions of the form

    F(x, μ) = f(x) + μ g(x),

where f and g are real-valued C² functions on R^n and μ ≥ 0.

[Fig. 1  Effect of smoothing the original optimization problem]

3.1 Global smoothing

The basic idea of global smoothing is to add a strictly convex function to the original objective, i.e.,

    F(x, μ) = f(x) + μ φ(x),

where φ is a strictly convex function. If φ is chosen to have a Hessian that is sufficiently positive definite for all x, i.e., the eigenvalues of this Hessian are uniformly bounded away from zero, it follows that for μ large enough, F(x, μ) is strictly convex. Similar results or proofs of such an assertion can be found, for example, in [1, Lemma 3.2.1].

Theorem 3.1  Suppose f : [0, 1]^n → R is a C² function and φ : X → R is a C² function such that the minimum eigenvalue of ∇²φ(x) is greater than or equal to ε for all x ∈ X, where X = [0, 1]^n and ε > 0. Then there exists M ≥ 0 such that if μ ≥ M, then f + μφ is a strictly convex function on X.

Consequently, for μ sufficiently large, any local minimizer of F(x, μ) is also the unique global minimizer. Typically the minimizer of φ(x) is known or is easy to find, and hence minimizing F(x, μ) for large μ is also easy. This is important in smoothing methods, because the basic idea is to solve the problem for a decreasing sequence of μ, starting with a large value and ending with one that may be close to zero. The solution x*(μk) of min_x F(x, μk) is used as the starting point of min_x F(x, μk+1).

The idea behind global smoothing is similar to that of local smoothing, namely, the hope is that by adding μφ(x) to f(x), poor local minimizers will be eliminated. There are, however, important differences between global and local smoothing. A key one is that local smoothing does not guarantee that the function is unimodal for sufficiently large values of the smoothing parameter. For example, if the algorithm in [24] is applied to the function cos(x), a multiple of cos(x) is obtained. Thus, the number of minimizers of the smoothed function has not been reduced. It is easy to appreciate
It is easy to appreciatethat the global smoothing approach is largely independent of the initial estimate of asolution, since if the initial function is unimodal, the choice of initial point is irrelevant to the minimizer found. When μ is decreased and the subsequent functions haveseveral minimizers, the old solution is used as the initial point. Consequently, whichminimizer is found is predetermined. Independence of the choice of initial point maybe viewed as both a strength and a weakness. What is happening is that any initialpoint is being replaced by a point close to the minimizer of (x). An obvious concern is that convergence will then be to the minimizer closest to the minimizer of (x). The key to the success of this approach is to choose (x) to have a minimizer

264W. Murray, K.-M. Ngthat is not close to any of the minimizers of f (x). This may not seem easy, but it ispossible for constrained problems. If it is known that the minimizers are on the edgeof the feasible region (e.g., with concave objective functions), then the “center of thefeasible region” may be viewed as being removed from all of them. An example isthe analytic center and there are certainly many other choices of generating an initialfeasible solution since the feasible region is linearly constrained.4 Logarithmic smoothingConsider the following relaxation of (1.1):Minimize f (x)subject to Ax b,0 x e.(4.1)A suitable smoothing function that we have used is given by the logarithmic smoothing function: (x) n ln xj j 1n ln(1 xj ).(4.2)j 1This function is clearly well-defined when 0 x e. If any value of xj is 0 or 1,we have (x) , which implies we can dispense with the bounds on x to get thefollowing transformed problem:Minimize f (x) μsubject toAx b,n [ln xj ln(1 xj )]j 1(4.3)where μ 0 is the smoothing parameter. When a linesearch algorithm starts with aninitial point 0 x0 e, then all iterates generated by the linesearch also satisfy thisproperty, provided care is taken in the linesearch to ensure that the maximum steptaken is within the bounds 0 x e.4.1 Properties of logarithmic smoothing functionThe function (x) is a logarithmic barrier function and is used with barrier methods(see [11]) to eliminate inequality constraints from a problem. In fact, (4.3) is sometimes known as the barrier subproblem for (4.1). Our use of this barrier function isnot to eliminate the constraints but because a barrier function appears to be an idealsmoothing function. Elimination of the inequality constraints is a useful bonus. Italso enables us to draw upon the extensive theoretical and practical results concerning barrier methods.A key property of the barrier term is that (x) is strictly convex. 
If μ is large enough, the function f + μφ will also be strictly convex, as follows from Theorem 3.1. By Theorem 8 of [11], under the assumptions already imposed on (4.1), if x*(μ) is a solution to (4.3), then there exists a solution x* to (4.1) such that lim_{μ→0} x*(μ) = x*. The following theorem is then a consequence of the results of [11]:

Theorem 4.1  Let x*(μ, γ) be any local minimizer of

    Minimize   f(x) − μ Σ_{j=1}^n [ln xj + ln(1 − xj)] + γ Σ_{j∈J} xj(1 − xj)
    subject to Ax = b,
               0 < x < e.            (4.4)

Then lim_{γ→∞} lim_{μ→0} xj*(μ, γ) = 0 or 1 for j ∈ J.

The general procedure of the barrier function method is to solve problem (4.3) approximately for a sequence of decreasing values of μ, since [11] has shown that the solution to (4.3), x*(μ), is a continuously differentiable curve. Note that if lim_{μ→0} x*(μ) = x* ∈ {0, 1}^n, then we need not solve (4.3) for μ very small, because the rounded solution for a modestly small value of μ should be adequate.

Consider now the example given in Sect. 2 to illustrate the possible failure of the use of a penalty function. The problem is now transformed to

    Minimize   x² − μ log x − μ log(1 − x) + γ x(1 − x)
    subject to 0 < x < 1,            (4.5)

where μ > 0 is the barrier parameter. If x*(μ, γ) denotes a stationary point, then x*(10, 10) = 0.3588 and x*(1, 10) = 0.1072. Thus, rounding of x*(μ, γ) in these cases will give the global minimizer. In fact, a trajectory of x*(μ, 10) can be obtained such that x*(μ, 10) → 0 as μ → 0.

We have in effect replaced the hard problem of a nonlinear integer optimization problem by what at first appearance is an equally hard problem of finding a global minimizer for a problem with continuous variables and a large number of local minima. The basis for our optimism that this is not the case lies in how we can utilize the parameters μ and γ to try to obtain a global minimizer, or at least a good local minimizer, of the composite objective function. Note that the term xj(1 − xj) attains its maximum at xj = 1/2 and that the logarithmic barrier term attains its minimum at the same point. Consequently, at this point, the gradient is given solely by ∇f(x).
In other words, the vertex that looks most attractive from the perspective of the objective determines the direction in which we tilt, regardless of the value of μ or γ. Starting at a neutral point and slowly imposing integrality is a key idea in the approach we advocate.

Also, note that provided μ is sufficiently large compared to γ, the problem will have a unique, and hence global, solution x*(μ, γ), which is a continuous function of μ and γ. The hope is that the global, or at least a good, minimizer of (1.1) is the one connected by a continuous trajectory to x*(μ, γ) for μ large and γ small.

Even if a global minimizer is not identified, we hope to obtain a good local minimizer and perhaps combine this approach with traditional methods.
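The trajectory for the one-dimensional example (4.5) can be traced numerically. The sketch below is our own illustration, not the paper's implementation: the reduction factor, iteration counts and the damped-Newton safeguard are invented. It warm-starts Newton's method on the stationarity condition of (4.5) with γ = 10 held fixed as μ is reduced, and the path approaches the global minimizer x* = 0 of the original binary problem:

```python
def follow_trajectory(mu=10.0, gamma=10.0, factor=0.7, n_reductions=60):
    # F(x) = x**2 - mu*ln(x) - mu*ln(1-x) + gamma*x*(1-x) on (0, 1);
    # stationarity: F'(x) = 2x - mu/x + mu/(1-x) + gamma*(1 - 2x) = 0
    x = 0.5                                   # neutral starting point
    for _ in range(n_reductions):
        for _ in range(100):                  # damped Newton on F'(x) = 0
            d1 = 2 * x - mu / x + mu / (1 - x) + gamma * (1 - 2 * x)
            d2 = 2 + mu / x ** 2 + mu / (1 - x) ** 2 - 2 * gamma
            if d2 <= 0.0:
                dx = -0.01 * d1               # safeguard: small gradient step
            else:
                dx = -d1 / d2
            while not (0.0 < x + dx < 1.0):   # keep iterates strictly inside (0, 1)
                dx *= 0.5
            x += dx
            if abs(dx) < 1e-14:
                break
        mu *= factor                          # reduce the barrier parameter
    return x

x_final = follow_trajectory()
```

For μ = 10 the composite objective is convex and the minimizer sits near the centre; warm-starting each reduced-μ solve keeps the iterates on the continuously differentiable branch x*(μ, 10), which slides to 0 as μ → 0, so rounding recovers the correct vertex.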

4.2 The parameters μ and γ

The parameter μ is the standard barrier parameter. However, its initial value and how it is adjusted differ from what is typically done in the normal barrier-function approach. For example, when a good estimate of a minimizer is known, then in the normal algorithm one tries to choose a barrier parameter that is compatible with this point, by which we mean one that is suitably small. This is not required in the approach advocated here. Indeed, it would be a poor policy. Since we are trying to find the global minimizer, we do not want the iterates to converge to a local minimizer, and it is vital that the iterates move away should the initial estimate be close to a local minimizer. The initial choice of μ is made to ensure it is suitably large, by which we mean it dominates the other terms in the objective. It is not difficult to do that, since choosing it larger than necessary does not incur a significant cost. The overall objective is trivial to optimize when μ is large. Moreover, subsequent objectives are easy to optimize when μ is reduced at a modest rate. Consequently, the additional effort of overestimating μ is small. Underestimating μ increases the danger of converging to a local minimizer. Unlike the regular barrier approach, the parameter is reduced at a modest rate, and a reasonable estimate of the solution of the current subproblem is obtained before the parameter is reduced. The basic idea is to stay close to the “central trajectory”.

Even though there may not be a high cost to overestimating the initial value of μ, it is sensible to make an attempt to estimate a reasonable value, especially since this impacts the choice of γ. Generally, the initial value of μ can be set as the maximum eigenvalue of ∇²f(x), and it is easy to show that such a choice of μ would be sufficient to make the function f + μφ strictly convex.
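This initialization and the coupled schedule for the two parameters can be sketched as follows (a minimal illustration with invented problem data and invented rate values; numpy is assumed, and the subproblem solve is elided since only the parameter bookkeeping is being shown):

```python
import numpy as np

# invented indefinite quadratic: f(x) = 0.5 * x^T H x
H = np.array([[2.0, 0.5],
              [0.5, -3.0]])

eigs = np.linalg.eigvalsh(H)

# initial barrier parameter: the largest eigenvalue of the Hessian of f
mu = float(eigs.max())
# initial penalty: a small fraction (here 1%) of |minimum eigenvalue|,
# so the penalty's concavity does not overwhelm the objective early on
gamma = 0.01 * abs(float(eigs.min()))

theta_mu, theta_gamma = 0.5, 2.0   # rates chosen within [0.1, 0.9] and [1.1, 10]
for _ in range(10):                # the (mu, gamma) continuation schedule
    # ... approximately solve the subproblem for the current (mu, gamma) here ...
    mu *= theta_mu                 # reduce the barrier at a modest rate
    gamma *= theta_gamma           # increase the penalty at a similar rate
```

The geometric coupling means the ratio γ/μ grows steadily, so convexification dominates early and integrality pressure dominates late, without γ ever having to tend to infinity.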
If a poor initial value is chosen, this can be deduced from the difference between the minimizer obtained and the minimizer of just the barrier function. This latter minimizer is easy to determine and may be used as the initial point for the first minimization. Another issue that differs from the regular barrier-function method is that there is no need to drive μ to near zero before terminating. Consequently, the ill-conditioning that is typical when using barrier functions is avoided.

Estimating a suitable initial value for γ is more challenging. What is important is not just its value but its value relative to μ and ‖∇²F(x)‖. The basic idea is that we start with a strongly convex function, and as μ is decreased and γ is increased the function becomes nonconvex. However, it is important that the nonconvexity of the penalty term does not overwhelm the nonconvexity of F(x) in the early stages, and so the initial value of γ should be much smaller than the absolute value of the minimum eigenvalue of ∇²F(x). Typically, an initial value of γ could be as small as 1% of the absolute value of the minimum eigenvalue of ∇²F(x). It helps that it is not necessary for γ → ∞. Provided a good initial value is chosen, the issue of adjusting it is not hard, since it may be increased at a similar rate to that at which μ is decreased. Usually θμ, the rate of decrease of μ, is set to a value in the range [0.1, 0.9], while θγ, the rate of increase of γ, is set to a value in the range [1.1, 10], with θμ θγ ≥ 1.

It is of interest to note the consequence of extreme strategies in the unconstrained case. If the initial values of μ and γ are small, we will get a rounded solution of the local minimizer to which a typical active-set algorithm would converge from the initial point chosen. If the initial value of μ is large but that of γ is kept small except in the later stages, we will obtain the rounded solution of an attempt to find the global minimizer of F(x). While this is better than finding the rounded solution of an arbitrary local minimizer, it is not the best we can do. However, it is clear that of these two extremes the former is the worse. What is not known is how small the initial value of γ needs to be compared to the initial value of μ.

4.3 A simple example

To illustrate the approach, we show how it would work on the following two-variable problem. Let

    f(x1, x2) = (x1 − 1)² + (x2 − 1)² + 0.1(x1 + x2 − 2).

Consider the following problem:

    Minimize   f(x1, x2)
    subject to xi ∈ {0, 2} for i = 1, 2.            (4.6)

Since f is separable, solving problem (4.6) is equivalent to solving two identical problems with one variable, and a global optimal solution of x1 = x2 = 0 can easily be deduced.

In our approach, we first relax problem (4.6) to

    Minimize   f(x1, x2)
    subject to 0 ≤ xi ≤ 2 for i = 1, 2.            (4.7)

We can then transform (4.7) to the following unconstrained optimization problem by introducing logarithmic barrier terms:

    Minimize   F(x1, x2) = f(x1, x2) − μ[log(x1) + log(2 − x1) + log(x2) + log(2 − x2)],            (4.8)

where μ > 0 and we have omitted the implicit bound constraints on x1 and x2. The contours of F for 4 different values of μ are illustrated in Fig. 2.

Problem (4.8) can be solved for each μ by applying Newton’s method to obtain a solution to the first-order necessary conditions of optimality for (4.8), i.e.,

    ∇F(x1, x2) = [ 2x1 − 1.9 − μ/x1 + μ/(2 − x1) ] = 0.
                 [ 2x2 − 1.9 − μ/x2 + μ/(2 − x2) ]

Our approach involves solving (4.8) for a fixed μ = μl > 0 to obtain an iterate x(l) = (x1(l), x2(l)), and then using this iterate as the initial iterate to solve (4.8) for μ = μl+1, where μl > μl+1 > 0.
The initial iterate for the first μ parameter, μ0, is set as x(0) = (1, 1), which is a point with equal Euclidean distances from the extreme points of the region X = [0, 2] × [0, 2].
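The continuation just described can be sketched in a few lines (our own illustration of the scheme, with an invented μ schedule and iteration limits; since F is separable and each coordinate subproblem is strictly convex for every μ > 0, a per-coordinate Newton iteration suffices):

```python
from math import isclose

def solve_coordinate(mu, x):
    # Newton's method on the 1-D first-order condition of (4.8):
    #   2x - 1.9 - mu/x + mu/(2 - x) = 0,   x in (0, 2)
    for _ in range(100):
        d1 = 2 * x - 1.9 - mu / x + mu / (2 - x)
        d2 = 2 + mu / x ** 2 + mu / (2 - x) ** 2   # > 0: the subproblem is convex
        dx = -d1 / d2
        while not (0.0 < x + dx < 2.0):            # stay strictly inside the box
            dx *= 0.5
        x += dx
        if abs(dx) < 1e-14:
            break
    return x

x = [1.0, 1.0]                  # x(0): equidistant from the extreme points of X
mu = 1.0
for _ in range(40):             # mu_l > mu_{l+1} > 0, warm-starting each solve
    x = [solve_coordinate(mu, xi) for xi in x]
    mu *= 0.5

# the trajectory approaches the continuous minimizer (0.95, 0.95), whose
# nearest point in {0, 2} x {0, 2} is the discrete solution x1 = x2 = 0 of (4.6)
rounded = [min((0.0, 2.0), key=lambda v: abs(v - xi)) for xi in x]
```

As μ shrinks, the barrier releases its pull toward the centre and the iterates settle at the unconstrained minimizer of f, slightly below 1 in each coordinate because of the linear tilt; rounding to the nearer of the two allowed values then yields the discrete solution.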

