4.1 Interval Scheduling - Chapter 4, Greedy Algorithms (Princeton University)


Chapter 4: Greedy Algorithms
Slides by Kevin Wayne. Copyright 2005 Pearson-Addison Wesley. All rights reserved.

4.1 Interval Scheduling

Interval Scheduling

Interval scheduling.
- Job j starts at s_j and finishes at f_j.
- Two jobs are compatible if they don't overlap.
- Goal: find a maximum subset of mutually compatible jobs.
[Figure: jobs a through h on a timeline from 0 to 11.]

Interval Scheduling: Greedy Algorithms

Greedy template. Consider jobs in some natural order. Take each job provided it's compatible with the ones already taken.
- [Earliest start time] Consider jobs in ascending order of s_j.
- [Earliest finish time] Consider jobs in ascending order of f_j.
- [Shortest interval] Consider jobs in ascending order of f_j - s_j.
- [Fewest conflicts] For each job j, count the number of conflicting jobs c_j. Schedule in ascending order of c_j.

Greedy template. Consider jobs in some natural order. Take each job provided it's compatible with the ones already taken.
[Figures: counterexamples for earliest start time, shortest interval, and fewest conflicts.]

Interval Scheduling: Greedy Algorithm

Greedy algorithm. Consider jobs in increasing order of finish time. Take each job provided it's compatible with the ones already taken.

  sort jobs by finish time so that f_1 <= f_2 <= ... <= f_n
  A <- {}                        (set of jobs selected)
  for j = 1 to n:
      if job j is compatible with A:
          A <- A + {j}
  return A

Implementation. O(n log n). (See the Python sketch below.)
- Remember the job j* that was added last to A.
- Job j is compatible with A iff s_j >= f_{j*}.

Interval Scheduling: Analysis

Theorem. Greedy algorithm is optimal.
Pf. (by contradiction)
- Assume greedy is not optimal, and let's see what happens.
- Let i_1, i_2, ..., i_k denote the set of jobs selected by greedy.
- Let j_1, j_2, ..., j_m denote the set of jobs in the optimal solution with i_1 = j_1, i_2 = j_2, ..., i_r = j_r for the largest possible value of r.
- Job i_{r+1} finishes before j_{r+1}, so why not replace job j_{r+1} with job i_{r+1}? The resulting solution is still feasible and optimal, but it contradicts the maximality of r.
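A minimal Python sketch of the earliest-finish-time greedy; the function name, the (start, finish) tuple representation, and the example intervals are my own illustration, not from the slides:

def interval_scheduling(jobs):
    """Earliest-finish-time greedy. jobs is a list of (start, finish) pairs.
    Returns a maximum-size subset of mutually compatible (non-overlapping) jobs."""
    selected = []
    last_finish = float("-inf")
    # Sort jobs by finish time so that f1 <= f2 <= ... <= fn.
    for start, finish in sorted(jobs, key=lambda job: job[1]):
        if start >= last_finish:          # compatible with the job added last
            selected.append((start, finish))
            last_finish = finish
    return selected

# Example (intervals treated as half-open, so touching endpoints are compatible):
print(interval_scheduling([(0, 6), (1, 4), (3, 5), (3, 8), (4, 7), (5, 9), (6, 10), (8, 11)]))
# -> [(1, 4), (4, 7), (8, 11)]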

4.1 Interval Partitioning

Interval Partitioning

Interval partitioning.
- Lecture j starts at s_j and finishes at f_j.
- Goal: find the minimum number of classrooms to schedule all lectures so that no two occur at the same time in the same room.
Ex: One schedule uses 4 classrooms to schedule 10 lectures; another schedule of the same lectures uses only 3.
[Figures: the two schedules on a 9:00-4:30 timeline.]

Interval Partitioning: Lower Bound on Optimal Solution

Def. The depth of a set of open intervals is the maximum number that contain any given time.
Key observation. Number of classrooms needed >= depth.
Ex: Depth of the schedule below = 3 (lectures a, b, c all contain 9:30), so the 3-classroom schedule is optimal.
Q. Does there always exist a schedule equal to the depth of the intervals? (A depth-computation sketch in Python follows.)
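As a companion to the depth lower bound, here is a hedged Python sketch that computes the depth of a set of intervals by sweeping over start/finish events; the event-sweep formulation and all names are my own, not from the slides:

def depth(lectures):
    """Depth of a set of open intervals: the maximum number of lectures in
    progress at any one time. Any schedule needs at least this many classrooms."""
    events = []
    for start, finish in lectures:
        events.append((start, 1))     # a lecture begins
        events.append((finish, -1))   # a lecture ends
    # Sort by time; at equal times, ends (-1) come before starts (+1), matching
    # the convention that a lecture ending at time t is compatible with one
    # starting at time t.
    events.sort()
    best = current = 0
    for _, delta in events:
        current += delta
        best = max(best, current)
    return best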

Interval Partitioning: Greedy Algorithm

Greedy algorithm. Consider lectures in increasing order of start time: assign each lecture to any compatible classroom.

  sort intervals by start time so that s_1 <= s_2 <= ... <= s_n
  d <- 0                         (number of allocated classrooms)
  for j = 1 to n:
      if lecture j is compatible with some classroom k:
          schedule lecture j in classroom k
      else:
          allocate a new classroom d + 1
          schedule lecture j in classroom d + 1
          d <- d + 1

Implementation. O(n log n).
- For each classroom k, maintain the finish time of the last job added.
- Keep the classrooms in a priority queue. (See the Python sketch below.)

Interval Partitioning: Greedy Analysis

Observation. Greedy algorithm never schedules two incompatible lectures in the same classroom.
Theorem. Greedy algorithm is optimal.
Pf.
- Let d = number of classrooms that the greedy algorithm allocates.
- Classroom d is opened because we needed to schedule a job, say j, that is incompatible with the lectures currently in all d - 1 other classrooms; those d - 1 lectures each end after s_j.
- Since we sorted by start time, all these incompatibilities are caused by lectures that start no later than s_j.
- Thus, we have d lectures overlapping at time s_j + epsilon.
- By the key observation, every schedule uses >= d classrooms.

4.2 Scheduling to Minimize Lateness

Scheduling to Minimize Lateness

Minimizing lateness problem.
- Single resource processes one job at a time.
- Job j requires t_j units of processing time and is due at time d_j.
- If j starts at time s_j, it finishes at time f_j = s_j + t_j.
- Lateness: l_j = max { 0, f_j - d_j }.
- Goal: schedule all jobs to minimize the maximum lateness L = max_j l_j.

Ex:
  j     1   2   3   4   5   6
  t_j   3   2   1   4   3   2
  d_j   6   8   9   9  14  15
[Figure: one schedule of these jobs with max lateness = 6.]
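A minimal sketch of the priority-queue implementation in Python, using heapq as the priority queue of classroom finish times; the function and variable names are illustrative, not from the slides:

import heapq

def interval_partitioning(lectures):
    """Greedy interval partitioning. lectures is a list of (start, finish) pairs.
    Returns the number of classrooms allocated, which equals the depth."""
    rooms = []  # min-heap of finish times, one entry per allocated classroom
    # Consider lectures in increasing order of start time.
    for start, finish in sorted(lectures):
        if rooms and rooms[0] <= start:
            # Compatible with the classroom that frees up earliest: reuse it.
            heapq.heapreplace(rooms, finish)
        else:
            # Incompatible with every existing classroom: allocate a new one.
            heapq.heappush(rooms, finish)
    return len(rooms)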

Minimizing Lateness: Greedy Algorithms

Greedy template. Consider jobs in some order.
- [Shortest processing time first] Consider jobs in ascending order of processing time t_j.
- [Earliest deadline first] Consider jobs in ascending order of deadline d_j.
- [Smallest slack] Consider jobs in ascending order of slack d_j - t_j.

Counterexample for shortest processing time first:
  j     1     2
  t_j   1    10
  d_j   100  10

Counterexample for smallest slack:
  j     1     2
  t_j   1    10
  d_j   2    10

Minimizing Lateness: Greedy Algorithm

Greedy algorithm. Earliest deadline first. (See the Python sketch below.)

  sort n jobs by deadline so that d_1 <= d_2 <= ... <= d_n
  t <- 0
  for j = 1 to n:
      assign job j to interval [t, t + t_j]
      s_j <- t, f_j <- t + t_j
      t <- t + t_j
  output intervals [s_j, f_j]

[Figure: the earliest-deadline-first schedule of the example above; max lateness = 1.]

Minimizing Lateness: No Idle Time

Observation. There exists an optimal schedule with no idle time.
Observation. The greedy schedule has no idle time.
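A hedged Python sketch of the earliest-deadline-first rule; jobs are given as (t_j, d_j) pairs, and the example call uses the six jobs from the lateness example above (the function name and representation are my own):

def edf_schedule(jobs):
    """Earliest-deadline-first. jobs is a list of (processing_time, deadline) pairs.
    Returns the scheduled intervals and the maximum lateness."""
    t = 0
    intervals = []
    max_lateness = 0
    for proc, due in sorted(jobs, key=lambda job: job[1]):   # sort by deadline
        s, f = t, t + proc                                   # assign job to [t, t + t_j]
        intervals.append((s, f))
        max_lateness = max(max_lateness, f - due)            # lateness = max(0, f_j - d_j)
        t = f
    return intervals, max_lateness

# Jobs from the example: t_j = 3, 2, 1, 4, 3, 2 and d_j = 6, 8, 9, 9, 14, 15.
print(edf_schedule([(3, 6), (2, 8), (1, 9), (4, 9), (3, 14), (2, 15)]))
# -> ([(0, 3), (3, 5), (5, 6), (6, 10), (10, 13), (13, 15)], 1)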

Minimizing Lateness: Inversions

Def. Given a schedule S, an inversion is a pair of jobs i and j such that i < j but j is scheduled before i.
[As before, we assume jobs are numbered so that d_1 <= d_2 <= ... <= d_n.]
Observation. The greedy schedule has no inversions.
Observation. If a schedule (with no idle time) has an inversion, it has one with a pair of inverted jobs scheduled consecutively.

Claim. Swapping two consecutive, inverted jobs reduces the number of inversions by one and does not increase the max lateness.
Pf. Let l_k be the lateness of job k before the swap, and let l'_k be its lateness afterwards.
- l'_k = l_k for all k other than i and j.
- l'_i <= l_i.
- If job j is late:
    l'_j = f'_j - d_j    (definition)
         = f_i - d_j     (j now finishes at time f_i)
         <= f_i - d_i    (i < j, so d_i <= d_j)
         <= l_i          (definition)
[Figure: the schedule before and after the swap; after the swap, j finishes where i used to, i.e. f'_j = f_i.]

Minimizing Lateness: Analysis of Greedy Algorithm

Theorem. Greedy schedule S is optimal.
Pf. Define S* to be an optimal schedule that has the fewest number of inversions, and let's see what happens.
- Can assume S* has no idle time.
- If S* has no inversions, then S = S*.
- If S* has an inversion, let i-j be an adjacent inversion.
  - Swapping i and j does not increase the maximum lateness and strictly decreases the number of inversions.
  - This contradicts the definition of S*.

Greedy Analysis Strategies

Greedy algorithm stays ahead. Show that after each step of the greedy algorithm, its solution is at least as good as any other algorithm's.
Structural. Discover a simple "structural" bound asserting that every possible solution must have a certain value. Then show that your algorithm always achieves this bound.
Exchange argument. Gradually transform any solution to the one found by the greedy algorithm without hurting its quality.
Other greedy algorithms. Kruskal, Prim, Dijkstra, Huffman, ...

4.3 Optimal Caching

Optimal Offline Caching

Caching.
- Cache with capacity to store k items.
- Sequence of m item requests d_1, d_2, ..., d_m.
- Cache hit: item already in cache when requested.
- Cache miss: item not already in cache when requested; must bring the requested item into the cache, and evict some existing item if the cache is full.
Goal. Eviction schedule that minimizes the number of cache misses.
Ex: k = 2, initial cache = {a, b}, requests: a, b, c, b, c, a, a, b. Optimal eviction schedule: 2 cache misses.
[Figure: the request sequence with the cache contents after each request; misses shown in red.]

Optimal Offline Caching: Farthest-In-Future

Farthest-in-future. Evict the item in the cache that is not requested until farthest in the future.
Ex: current cache = {a, b, c, d, e, f}; future queries: g a b c e d a b b a c d e a f a d e f g h ... The request for g is a cache miss, and f, the cached item whose next request is farthest in the future, is ejected.
Theorem. [Belady, 1960s] FF is an optimal eviction schedule.
Pf. The algorithm and theorem are intuitive; the proof is subtle. (A Python sketch of FF appears below.)

Reduced Eviction Schedules

Def. A reduced schedule is a schedule that only inserts an item into the cache in a step in which that item is requested.
Intuition. Can transform an unreduced schedule into a reduced one with no more cache misses.
[Figure: an unreduced schedule and a reduced schedule for the same request sequence.]
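A small Python sketch of farthest-in-future eviction as a straightforward simulation that counts cache misses; this is my own illustration, and the initial-cache parameter exists only so the slide's example can be reproduced:

def farthest_in_future(requests, k, initial=()):
    """Count cache misses under Belady's farthest-in-future rule.
    requests: the full (offline) request sequence; k: cache capacity."""
    cache = set(initial)
    misses = 0
    for i, item in enumerate(requests):
        if item in cache:
            continue                         # cache hit
        misses += 1                          # cache miss
        if len(cache) == k:
            # Evict the cached item whose next request is farthest in the
            # future (or that is never requested again).
            def next_use(c):
                try:
                    return requests.index(c, i + 1)
                except ValueError:
                    return float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(item)
    return misses

# Slide example: k = 2, initial cache {a, b}, requests a, b, c, b, c, a, a, b.
print(farthest_in_future(list("abcbcaab"), 2, initial="ab"))   # -> 2 cache misses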

Reduced Eviction Schedules

Claim. Given any unreduced schedule S, we can transform it into a reduced schedule S' with no more cache misses.
Pf. (by induction on the number of unreduced items)
- Suppose S brings d into the cache at time t without a request (d doesn't enter the cache at a requested time).
- Let c be the item S evicts when it brings d into the cache.
- Case 1: d is evicted at time t', before the next request for d.
- Case 2: d is requested at time t' before d is evicted.
[Figure: the two cases, showing how S' defers bringing d into the cache.]

Farthest-In-Future: Analysis

Theorem. FF is an optimal eviction algorithm.
Pf. (by induction on the number of requests j)
Invariant: there exists an optimal reduced schedule S that makes the same eviction decisions as S_FF through the first j requests.
- Let S be a reduced schedule that satisfies the invariant through j requests. We produce S' that satisfies the invariant after j + 1 requests.
- Consider the (j + 1)st request d = d_{j+1}. Since S and S_FF have agreed up until now, they have the same cache contents before request j + 1.
- Case 1: d is already in the cache. S' = S satisfies the invariant.
- Case 2: d is not in the cache, and S and S_FF evict the same element. S' = S satisfies the invariant.
- Case 3: d is not in the cache; S_FF evicts e; S evicts f, with f not equal to e.
  - Begin the construction of S' from S by evicting e instead of f.
  - Now S' agrees with S_FF on the first j + 1 requests; we show that having element f in the cache (for S') is no worse than having element e (for S).
  - Let j' be the first time after j + 1 that S and S' take a different action, and let g be the item requested at time j'. The differing action must involve e or f (or both).
  - Case 3a: g = e. Can't happen with farthest-in-future, since there must be a request for f before e.
  - Case 3b: g = f. Element f can't be in the cache of S, so let e' be the element that S evicts.
    - If e' = e, then S' accesses f from the cache; now S and S' have the same cache.
    - If e' is not e, then S' evicts e' and brings e into the cache; now S and S' have the same cache.
  Note: S' is no longer reduced, but it can be transformed into a reduced schedule that agrees with S_FF through step j + 1.

Farthest-In-Future: Analysis (continued)

Recall: j' is the first time after j + 1 that S and S' take a different action, and g is the item requested at time j'; the differing action must involve e or f (or both).
- Case 3c: g is neither e nor f. Then S must evict e (otherwise S' would take the same action). Make S' evict f; now S and S' have the same cache.

Caching Perspective

Online vs. offline algorithms.
- Offline: the full sequence of requests is known a priori.
- Online (reality): requests are not known in advance.
- Caching is among the most fundamental online problems in CS.
LIFO. Evict the page brought in most recently.
LRU. Evict the page whose most recent access was earliest (FF with the direction of time reversed!).
Theorem. FF is an optimal offline eviction algorithm.
- Provides a basis for understanding and analyzing online algorithms.
- LRU is k-competitive. [Section 13.8]
- LIFO is arbitrarily bad.

4.4 Shortest Paths in a Graph

Shortest Path Problem

Shortest path network.
- Directed graph G = (V, E).
- Source s, destination t.
- Length l_e = length of edge e.
Shortest path problem: find the shortest directed path from s to t (cost of path = sum of edge costs in path).
Ex: cost of path s-2-3-5-t = 9 + 23 + 2 + 16 = 50.
[Figure: example network, captioned "shortest path from Princeton CS department to Einstein's house".]

Dijkstra's Algorithm

Dijkstra's algorithm.
- Maintain a set of explored nodes S for which we have determined the shortest path distance d(u) from s to u.
- Initialize S = {s}, d(s) = 0.
- Repeatedly choose the unexplored node v which minimizes
      pi(v) = min over edges e = (u, v) with u in S of  d(u) + l_e
  (the shortest path to some u in the explored part, followed by a single edge (u, v)), add v to S, and set d(v) = pi(v).

Dijkstra's Algorithm: Proof of Correctness

Invariant. For each node u in S, d(u) is the length of the shortest s-u path.
Pf. (by induction on |S|)
- Base case: |S| = 1 is trivial.
- Inductive hypothesis: assume true for |S| = k >= 1.
- Let v be the next node added to S, and let (u, v) be the chosen edge.
- The shortest s-u path plus (u, v) is an s-v path of length pi(v).
- Consider any s-v path P; we'll see that it's no shorter than pi(v).
- Let (x, y) be the first edge in P that leaves S, and let P' be the subpath to x. P is already too long as soon as it leaves S:
      l(P) >= l(P') + l(x, y) >= d(x) + l(x, y) >= pi(y) >= pi(v)
  (nonnegative weights; inductive hypothesis; definition of pi(y); Dijkstra chose v instead of y).

Dijkstra's Algorithm: Implementation

For each unexplored node, explicitly maintain pi(v) = min over e = (u, v) with u in S of d(u) + l_e.
- The next node to explore is the node with minimum pi(v).
- When exploring v, for each incident edge e = (v, w), update pi(w) = min { pi(w), pi(v) + l_e }.
Efficient implementation. Maintain a priority queue of unexplored nodes, prioritized by pi(v). (See the Python sketch below.)

  PQ operation   Dijkstra (# ops)   Array   Binary heap   d-way heap       Fib heap†
  Insert         n                  n       log n         d log_d n        1
  ExtractMin     n                  n       log n         d log_d n        log n
  ChangeKey      m                  1       log n         log_d n          1
  IsEmpty        n                  1       1             1                1
  Total                             n^2     m log n       m log_{m/n} n    m + n log n

  † Individual ops are amortized bounds.
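A compact Python sketch of Dijkstra with a binary heap (heapq), using lazy deletion of stale queue entries instead of an explicit ChangeKey; the graph representation, names, and the extra edge lengths in the example are my own, not from the slides:

import heapq

def dijkstra(graph, s):
    """Dijkstra's algorithm with a binary heap, O(m log n) for nonnegative lengths.
    graph: dict mapping each node to a list of (neighbor, edge_length) pairs.
    Returns a dict d with the shortest-path distance d[u] from s to each reachable u."""
    d = {}                       # explored nodes S with their final distances
    pq = [(0, s)]                # priority queue of (pi(v), v) candidates
    while pq:
        dist, v = heapq.heappop(pq)
        if v in d:
            continue             # stale entry: v was already explored
        d[v] = dist              # set d(v) = pi(v) and add v to S
        for w, le in graph.get(v, ()):
            if w not in d:
                # Relax edge (v, w): pi(w) = min(pi(w), pi(v) + l_e), handled lazily
                # by pushing a new candidate rather than decreasing a key.
                heapq.heappush(pq, (dist + le, w))
    return d

# Tiny example (hypothetical edge lengths, not the full network from the slide):
print(dijkstra({"s": [("2", 9), ("6", 14)], "2": [("3", 23)],
                "3": [("5", 2)], "5": [("t", 16)], "6": [("t", 44)]}, "s"))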

Extra Slides

Edsger W. Dijkstra

- "The question of whether computers can think is like the question of whether submarines can swim."
- "Do only what only you can do."
- "In their capacity as a tool, computers will be but a ripple on the surface of our culture. In their capacity as intellectual challenge, they are without precedent in the cultural history of mankind."
- "The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence."
- "APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums."

Coin Changing

"Greed is good. Greed is right. Greed works. Greed clarifies, cuts through, and captures the essence of the evolutionary spirit." - Gordon Gekko (Michael Douglas)

Coin changing.
Goal. Given currency denominations 1, 5, 10, 25, 100, devise a method to pay an amount to the customer using the fewest number of coins. Ex: 34 cents.
Cashier's algorithm. At each iteration, add the coin of the largest value that does not take us past the amount to be paid. Ex: $2.89.

Coin-Changing: Greedy Algorithm

Cashier's algorithm. At each iteration, add the coin of the largest value that does not take us past the amount to be paid. (A Python sketch appears after this section.)

  sort coin denominations by value: c_1 < c_2 < ... < c_n
  S <- {}                        (coins selected)
  while x > 0:
      let k be the largest integer such that c_k <= x
      if k = 0:
          return "no solution found"
      x <- x - c_k
      S <- S + {k}
  return S

Q. Is the cashier's algorithm optimal?

Coin-Changing: Analysis of Greedy Algorithm

Theorem. Greedy algorithm is optimal for U.S. coinage: 1, 5, 10, 25, 100.
Pf. (by induction on x)
- Consider the optimal way to change c_k <= x < c_{k+1}: greedy takes coin k.
- We claim that any optimal solution must also take coin k.
  - If not, it needs enough coins of type c_1, ..., c_{k-1} to add up to x.
  - The table below indicates that no optimal solution can do this.
- The problem reduces to coin-changing x - c_k cents, which, by induction, is optimally solved by the greedy algorithm.

  k   c_k   all optimal solutions must satisfy   max value of coins 1, ..., k-1 in any OPT
  1   1     P <= 4                               -
  2   5     N <= 1                               4
  3   10    N + D <= 2                           4 + 5 = 9
  4   25    Q <= 3                               20 + 4 = 24
  5   100   no limit                             75 + 24 = 99

(Here P, N, D, Q denote the numbers of pennies, nickels, dimes, and quarters.)

Observation. Greedy algorithm is sub-optimal for US postal denominations: 1, 10, 21, 34, 70, 100, 350, 1225, 1500.
Counterexample. 140 cents.
- Greedy: 100, 34, 1, 1, 1, 1, 1, 1.
- Optimal: 70, 70.
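A short Python sketch of the cashier's algorithm; the second call reproduces the postal-denomination counterexample (the function name and signature are illustrative, not from the slides):

def cashier(x, denominations=(1, 5, 10, 25, 100)):
    """Cashier's algorithm: repeatedly take the largest coin that does not
    exceed the remaining amount. Returns the coins used, or None if x can't be paid."""
    coins = []
    for c in sorted(denominations, reverse=True):
        while x >= c:
            coins.append(c)
            x -= c
    return coins if x == 0 else None

print(cashier(34))                               # [25, 5, 1, 1, 1, 1] -- optimal for U.S. coins
print(cashier(140, (1, 10, 21, 34, 70, 100)))    # [100, 34, 1, 1, 1, 1, 1, 1] -- suboptimal: 70 + 70 uses two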

Selecting Breakpoints

Selecting breakpoints.
- Road trip from Princeton to Palo Alto along a fixed route.
- Refueling stations at certain points along the way.
- Fuel capacity C.
- Goal: make as few refueling stops as possible.
Greedy algorithm. Go as far as you can before refueling.
[Figure: breakpoints along the route from Princeton to Palo Alto.]

Selecting Breakpoints: Greedy Algorithm

Truck driver's algorithm. (See the Python sketch below.)

  sort breakpoints so that 0 = b_0 < b_1 < b_2 < ... < b_n = L
  S <- {0}                       (breakpoints selected)
  x <- 0                         (current location)
  while x is not b_n:
      let p be the largest integer such that b_p <= x + C
      if b_p = x:
          return "no solution"
      x <- b_p
      S <- S + {p}
  return S

Implementation. O(n log n). Use binary search to select each breakpoint p.

Selecting Breakpoints: Correctness

Theorem. Greedy algorithm is optimal.
Pf. (by contradiction)
- Assume greedy is not optimal, and let's see what happens.
- Let 0 = g_0 < g_1 < ... < g_p = L denote the set of breakpoints chosen by greedy.
- Let 0 = f_0 < f_1 < ... < f_q = L denote the set of breakpoints in an optimal solution with f_0 = g_0, f_1 = g_1, ..., f_r = g_r for the largest possible value of r.
- Note: g_{r+1} >= f_{r+1} by the greedy choice of the algorithm (why doesn't the optimal solution drive a little further?).
- Replacing f_{r+1} with g_{r+1} yields another optimal solution that has one more breakpoint in common with greedy, contradicting the maximality of r.
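A hedged Python sketch of the truck driver's algorithm, using bisect for the binary-search step mentioned in the implementation note; the names and the example breakpoints are my own, not from the slides:

from bisect import bisect_right

def select_breakpoints(b, C):
    """Truck driver's greedy. b is the sorted list of breakpoints with b[0] = 0
    and b[-1] = L (the destination); C is the distance reachable on a full tank.
    Returns the indices of the breakpoints visited (including start and destination),
    or None if the trip is infeasible."""
    S = [0]              # breakpoints selected; start with a full tank at b[0] = 0
    x = b[0]             # current location
    while x < b[-1]:
        # Binary search for the farthest breakpoint reachable from x.
        p = bisect_right(b, x + C) - 1
        if b[p] == x:
            return None  # no progress possible: the next station is out of range
        x = b[p]
        S.append(p)
    return S

# Hypothetical example: L = 100, fuel capacity 30.
print(select_breakpoints([0, 10, 25, 40, 55, 70, 90, 100], 30))   # -> [0, 2, 4, 5, 7]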

