Stochastic Programming Or Dynamic Programming

Stochastic Programming or Dynamic Programming
V. Leclère, March 23, 2017 (52 slides)

Presentation Outline
1. Dealing with Uncertainty: Objective and constraints; Evaluating a solution
2. Stochastic Programming: Stochastic Programming Approach; Information Framework; Toward multistage program
3. Stochastic Dynamic Programming: Dynamic Programming Principle; Curses of Dimensionality; SDDP
4. Conclusion: which approach should I use?

An optimization problem with uncertainty

Adding uncertainty ξ to the mix:

    min_{u_0} L(u_0, ξ)   s.t.  g(u_0, ξ) ≤ 0

Remarks:
- ξ is unknown. There are two main ways of modelling it:
  - ξ ∈ Ξ with a known uncertainty set Ξ and a pessimistic approach: this is the robust optimization (RO) approach.
  - ξ is a random variable with known probability law: this is the Stochastic Programming (SP) approach.
- The cost is not well defined.
  RO: max_{ξ ∈ Ξ} L(u, ξ).  SP: E[L(u, ξ)].
- The constraints are not well defined.
  RO: g(u, ξ) ≤ 0 for all ξ ∈ Ξ.  SP: g(u, ξ) ≤ 0, P-a.s.

Alternative cost functions (I)

When the cost L(u, ξ) is random, it might be natural to minimize its expectation E[L(u, ξ)]. This is even justified when the same problem is solved a large number of times (Law of Large Numbers).

In some cases, however, the expectation is not really representative of your risk attitude. Let us consider two examples:
- Are you ready to pay 1000 to have one chance in ten of winning 10000?
- You need to be at the airport in one hour or you miss your flight. You have the choice between two means of transport: one surely takes 50 minutes; the other takes 40 minutes four times out of five, and 70 minutes one time out of five.

Alternative cost functions (II)

Here are some cost functions you might consider:
- Probability of reaching a given level of cost: P(L(u, ξ) ≥ 0).
- Value-at-Risk of costs, V@R_α(L(u, ξ)), where for any real-valued random variable X,

      V@R_α(X) := inf { t ∈ R : P(X > t) ≤ α }.

  In other words, there is only a probability of α of obtaining a cost worse than V@R_α(X).
- Average Value-at-Risk of costs, AV@R_α(L(u, ξ)), which is the expected cost over the α worst outcomes.
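These risk measures are easy to estimate from samples. Below is a minimal sketch, not from the slides: the function name `var_avar` and the empirical-quantile conventions are my own, as is the use of the lottery from the previous slide as test data.

```python
import numpy as np

def var_avar(costs, alpha):
    """Empirical V@R_alpha and AV@R_alpha from sampled costs.

    V@R_alpha: smallest level t such that the fraction of sampled costs
    exceeding t is at most alpha.  AV@R_alpha: average of the alpha
    fraction of worst (largest) sampled costs.
    """
    costs = np.sort(np.asarray(costs, dtype=float))
    n = len(costs)
    k = int(np.ceil(alpha * n))          # number of "worst" samples kept
    var = costs[n - k]                   # empirical (1 - alpha)-quantile
    avar = costs[n - k:].mean()          # mean over the alpha worst outcomes
    return var, avar

# Lottery example from the previous slide: pay 1000, win 10000 w.p. 1/10.
rng = np.random.default_rng(0)
gain = np.where(rng.random(100_000) < 0.1, 10_000.0, 0.0) - 1_000.0
var5, avar5 = var_avar(-gain, alpha=0.05)   # costs are negative gains
```

Since AV@R averages the tail beyond V@R, the estimate always satisfies AV@R_α ≥ V@R_α.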

Alternative constraints (I)

The natural extension of the deterministic constraint g(u, ξ) ≤ 0 to g(u, ξ) ≤ 0 P-a.s. can be extremely conservative, and often even leaves no admissible solution.

For example, suppose u is a level of production that needs to be greater than the demand. In a deterministic setting, the realized demand is equal to the forecast. In a stochastic setting, we add an error to the forecast. If the error is unbounded (e.g. Gaussian), no control u is admissible.

Alternative constraints (II)

Here are a few possible constraints:
- E[g(u, ξ)] ≤ 0, for quality-of-service-like constraints.
- P(g(u, ξ) ≤ 0) ≥ 1 − α, a chance constraint. Chance constraints are easy to present, but might lead to misconceptions, as nothing is said about the events where the constraint is not satisfied.
- AV@R_α(g(u, ξ)) ≤ 0.

Computing expectations

Computing an expectation like E[L(u, ξ)] for a given u is costly:
- If ξ is a r.v. with a known law admitting a density, E[L(u, ξ)] is a (multidimensional) integral.
- If ξ is a r.v. with a known discrete law, E[L(u, ξ)] is a sum over all possible realizations of ξ, which can be huge.
- If ξ is a r.v. that can be simulated but whose law is unknown, E[L(u, ξ)] cannot be computed exactly.

Solution: use the Law of Large Numbers (LLN) and the Central Limit Theorem (CLT):
- Draw N ≈ 1000 realizations of ξ.
- Compute the sample average (1/N) Σ_{i=1}^N L(u, ξ_i).
- Use the CLT to give an asymptotic confidence interval for the expectation.

This is known as the Monte-Carlo method.
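The three steps above translate directly into code. A minimal sketch, with names of my own choosing (`mc_expectation`, and a toy quadratic loss whose expectation is known in closed form so the estimate can be checked):

```python
import numpy as np

def mc_expectation(L, sample_xi, u, N=10_000, seed=0, z=1.96):
    """Monte-Carlo estimate of E[L(u, xi)] with an asymptotic 95% CI.

    LLN: the sample average converges to the expectation.
    CLT: the error is asymptotically Gaussian, giving the interval below.
    """
    rng = np.random.default_rng(seed)
    samples = np.array([L(u, sample_xi(rng)) for _ in range(N)])
    mean = samples.mean()
    half = z * samples.std(ddof=1) / np.sqrt(N)   # CLT half-width
    return mean, (mean - half, mean + half)

# Toy check: L(u, xi) = (u - xi)^2 with xi ~ N(0, 1), so E[L] = u^2 + 1.
mean, (lo, hi) = mc_expectation(lambda u, xi: (u - xi) ** 2,
                                lambda rng: rng.normal(), u=2.0)
```

The half-width shrinks like 1/√N, which is exactly why these estimates get expensive when high accuracy is needed.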

Consequence: evaluating a solution is difficult

- In stochastic optimization, even evaluating the value of a solution can be difficult and require approximate methods.
- The same holds true for checking admissibility of a candidate solution.
- It is even more difficult to obtain first-order information (subgradients).
- Standard solution: sampling, and solving the sampled problem (Sample Average Approximation).

Optimization problem and simulator

- Generally speaking, stochastic optimization problems are not well posed and often need to be approximated before being solved.
- Good practice consists in defining a simulator, i.e. a representation of the "real problem" on which solutions can be tested.
- Then find candidate solutions by solving one (or multiple) approximated problems.
- Finally, evaluate the candidate solutions on the simulator. The comparison can be done along more than one dimension (e.g. constraints, risk).

One-Stage Problem

Assume that ξ has a discrete distribution, with P(ξ = ξ_i) = p_i > 0 for i ∈ ⟦1, n⟧. Then the one-stage problem

    min_{u_0}  E[L(u_0, ξ)]
    s.t.  g(u_0, ξ) ≤ 0,  P-a.s.

can be written

    min_{u_0}  Σ_{i=1}^n p_i L(u_0, ξ_i)
    s.t.  g(u_0, ξ_i) ≤ 0,  ∀ i ∈ ⟦1, n⟧.

(If the distribution is continuous, we can sample it and work on the sampled distribution; this is the Sample Average Approximation approach, which comes with lots of guarantees and results.)
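The rewriting above is mechanical, so it is easy to demonstrate. A minimal sketch on a toy instance of my own (a newsvendor-style loss; the helper `solve_one_stage` is not from the slides), with no constraint g and the control enumerated over a grid of candidates:

```python
import numpy as np

def solve_one_stage(L, scenarios, probs, candidates):
    """min over candidate controls u of sum_i p_i L(u, xi_i).

    With a continuous law, `scenarios` would instead be an i.i.d. sample
    and probs = [1/N] * N: the Sample Average Approximation.
    """
    costs = [sum(p * L(u, xi) for xi, p in zip(scenarios, probs))
             for u in candidates]
    i_best = int(np.argmin(costs))
    return candidates[i_best], costs[i_best]

# Order u units at unit cost 1, sell min(u, demand) at unit price 3.
L = lambda u, xi: 1.0 * u - 3.0 * min(u, xi)
u_star, c_star = solve_one_stage(L, scenarios=[1, 2, 3, 4],
                                 probs=[0.25] * 4, candidates=range(6))
```

For this instance the classical critical-ratio argument gives the same answer (order 3 units), which is a handy sanity check on the enumeration.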

Recourse Variable

In most problems we can make a correction u_1 once the uncertainty is known:

    u_0 ⇝ ξ ⇝ u_1.

As the recourse control u_1 is a function of ξ, it is a random variable. The two-stage optimization problem then reads

    min_{u_0, u_1}  E[L(u_0, ξ, u_1)]
    s.t.  g(u_0, ξ, u_1) ≤ 0,  P-a.s.
          σ(u_1) ⊂ σ(ξ).

Two-stage Problem

The extensive formulation of

    min_{u_0, u_1}  E[L(u_0, ξ, u_1)]
    s.t.  g(u_0, ξ, u_1) ≤ 0,  P-a.s.

is

    min_{u_0, {u_1^i}_{i ∈ ⟦1,n⟧}}  Σ_{i=1}^n p_i L(u_0, ξ_i, u_1^i)
    s.t.  g(u_0, ξ_i, u_1^i) ≤ 0,  ∀ i ∈ ⟦1, n⟧.

It is a deterministic problem that can be solved with standard tools or specific methods.
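To make the extensive formulation concrete, here is a toy instance of my own (not from the slides): buy u_0 at unit cost 1 before observing the demand ξ_i, then buy recourse u_1^i at unit cost 2 so that u_0 + u_1^i ≥ ξ_i. For fixed u_0 the optimal recourse is explicit, so a grid search over u_0 stands in for the LP solver one would normally hand the extensive form to:

```python
import numpy as np

xi = np.array([2.0, 4.0, 6.0])   # demand scenarios
p = np.array([1/3, 1/3, 1/3])    # their probabilities

def extensive_cost(u0):
    """Extensive-form objective u0 + sum_i p_i * 2 * u1_i, with the optimal
    per-scenario recourse u1_i = max(xi_i - u0, 0) plugged in."""
    u1 = np.maximum(xi - u0, 0.0)
    return u0 + p @ (2.0 * u1)

grid = np.linspace(0.0, 8.0, 801)
u0_star = grid[np.argmin([extensive_cost(u) for u in grid])]
```

The first-stage variable u_0 is shared by all scenarios while each scenario gets its own u_1^i, which is exactly the structure of the deterministic equivalent above.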

Recourse assumptions

- We say that we are in a complete recourse framework if, for all u_0 and all possible outcomes ξ, every control u_1 is admissible.
- We say that we are in a relatively complete recourse framework if, for all u_0 and all possible outcomes ξ, there exists a control u_1 that is admissible.
- For a lot of algorithms, relatively complete recourse is a condition of convergence. It means that there are no induced constraints.

Two-stage framework: three information models

Consider the problem min_{u_0, u_1} E[L(u_0, ξ, u_1)].
- Open-Loop approach: u_0 and u_1 are deterministic. In this case both controls are chosen without any knowledge of the alea ξ. The set of controls is small, and an optimal control can be found through specific methods (e.g. Stochastic Gradient).
- Two-Stage approach: u_0 is deterministic and u_1 is measurable with respect to ξ. This is the problem tackled by Stochastic Programming methods.
- Anticipative approach: u_0 and u_1 are measurable with respect to ξ. This approach consists in solving one deterministic problem per possible outcome of the alea, and taking the expectation of the values of these problems.

Comparing the models

By simple comparison of constraints we have

    V^anticipative ≤ V^2-stage ≤ V^OL.

- V^OL can be approximated through specific methods (e.g. Stochastic Gradient).
- V^2-stage is obtained through methods specific to Stochastic Programming. There are two main approaches:
  - Lagrangian decomposition methods (like the Progressive Hedging algorithm);
  - Benders decomposition methods (like L-shaped or nested-decomposition methods).
- V^anticipative is difficult to compute exactly, but can be estimated through a Monte-Carlo approach: draw a reasonable number of realizations of ξ, solve the deterministic problem for each realization ξ_i, and take the mean of the values of these deterministic problems.
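The chain of inequalities can be checked numerically on a small recourse problem of my own making (buy u_0 at unit cost 1 before seeing the demand ξ, buy u_1 at unit cost 2 afterwards, subject to u_0 + u_1 ≥ ξ); nothing here is from the slides except the three information models themselves:

```python
import numpy as np

xi = np.array([2.0, 4.0, 6.0])   # equiprobable demand scenarios
p = np.array([1/3, 1/3, 1/3])

def cost(u0, u1):                 # u1: scalar, or one value per scenario
    return u0 + p @ (2.0 * np.broadcast_to(u1, xi.shape))

grid = np.linspace(0.0, 8.0, 801)

# Open-loop: u1 deterministic, constraint u0 + u1 >= xi must hold a.s.
v_ol = min(cost(u0, max(xi.max() - u0, 0.0)) for u0 in grid)

# Two-stage: u1 adapted to xi; optimal recourse is explicit for fixed u0.
v_2s = min(cost(u0, np.maximum(xi - u0, 0.0)) for u0 in grid)

# Anticipative: u0 chosen knowing xi, one problem per scenario
# (best is u0 = xi, u1 = 0, costing xi), then take the expectation.
v_ant = p @ xi
```

On this instance the three values come out strictly ordered, illustrating the value of information: the more a control is allowed to depend on ξ, the lower the optimal cost.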

Managing a dam

A dam can be seen as a battery, with a random inflow of free electricity to be used at the best time.

Where do we come from: two-stage programming

We take decisions in two stages: u_0 ⇝ W_1 ⇝ u_1, with u_1 the recourse decision.

[Figure: a fan of scenarios s ∈ S, each with noise w_1^s and probability π_s, and one recourse decision u_{1,s} per branch.]

On a tree, this means solving the extensive formulation:

    min_{u_0, u_{1,s}}  Σ_{s ∈ S} π_s ( ⟨c_s, u_0⟩ + ⟨p_s, u_{1,s}⟩ ).

We have as many u_{1,s} as scenarios!

Extending two-stage to multistage programming

We take decisions in T stages:

    W_0 ⇝ u_0 ⇝ W_1 ⇝ u_1 ⇝ … ⇝ W_T ⇝ u_T,

and solve

    min_U  E[ j(U, W) ],   with  U = (u_0, …, u_T),  W = (W_1, …, W_T).

[Figure: a scenario tree branching at each stage, with one decision per node.]

Introducing the non-anticipativity constraint

We do not know what lies behind the door.

Non-anticipativity: at time t, decisions are taken sequentially, knowing only the past realizations of the perturbations. Mathematically, this is equivalent to saying that at time t the decision u_t is
1. a function of past noises: u_t = π_t(W_0, …, W_t);
2. taken knowing the available information: σ(u_t) ⊂ σ(W_0, …, W_t).
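On a scenario tree written as a flat list of noise sequences, condition 1 amounts to forcing u_t to take a single value on every group of scenarios sharing the same history (W_0, …, W_t); in an extensive formulation these groups yield the constraints u_{t,s} = u_{t,s'}. A sketch (the helper name is mine):

```python
from collections import defaultdict

def nonanticipative_groups(scenarios, t):
    """Group scenario indices sharing the same noise history (W_0, ..., W_t).

    A non-anticipative decision u_t must be constant on each group: in an
    extensive formulation, u_{t,s} = u_{t,s'} for all s, s' in a group."""
    groups = defaultdict(list)
    for i, scen in enumerate(scenarios):
        groups[tuple(scen[: t + 1])].append(i)
    return dict(groups)

# Four scenarios over two stages, binary noise (W_0, W_1):
scenarios = [(0, 0), (0, 1), (1, 0), (1, 1)]
g0 = nonanticipative_groups(scenarios, 0)   # u_0 sees only W_0
g1 = nonanticipative_groups(scenarios, 1)   # u_1 sees (W_0, W_1)
```

At t = 0 the four scenarios collapse into two groups (one per value of W_0), while at t = 1 every scenario is its own group: the decision is allowed to depend on everything observed so far, and nothing more.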

Multistage extensive formulation approach

Assume that W_t ∈ R^{n_w} can take n_w values, and that U_t(x) can take n_u values. Then, considering the extensive formulation approach, we have:
- n_w^T scenarios;
- (n_w^{T+1} − 1)/(n_w − 1) nodes in the tree;
- roughly n_u (n_w^{T+1} − 1)/(n_w − 1) ≈ n_u n_w^T variables in the optimization problem.

The complexity grows exponentially with the number of stages. :-(

A way to overcome this issue is to compress information!
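The counts above are easy to reproduce. A small sketch (the function name is mine) that evaluates the formulas and exhibits the exponential blow-up:

```python
def tree_stats(n_w, n_u, T):
    """Scenario-tree sizes for T stages with n_w noise outcomes per stage
    and n_u possible controls per node."""
    scenarios = n_w ** T
    # Geometric series 1 + n_w + ... + n_w^T:
    nodes = (n_w ** (T + 1) - 1) // (n_w - 1)
    variables = n_u * nodes          # roughly n_u * n_w^T for large T
    return scenarios, nodes, variables

# 10 noise outcomes per stage, 5 controls per node:
for T in (2, 5, 10):
    print(T, tree_stats(10, 5, T))
```

Already at T = 10 with 10 outcomes per stage the tree has over 10^10 nodes, which is why the next part of the deck turns to compressing information (the state) instead of enumerating scenarios.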
