INTERNATIONAL JOURNAL OF HIGH PERFORMANCE COMPUTING AND NETWORKING


Selective Preemption Strategies for Parallel Job Scheduling

Rajkumar Kettimuthu, Vijay Subramani, Srividya Srinivasan, Thiagaraja Gopalsamy, D. K. Panda, and P. Sadayappan

(R. Kettimuthu is with Argonne National Laboratory. V. Subramani and S. Srinivasan are with Microsoft Corporation. T. Gopalsamy is with Altera Corporation. D. K. Panda and P. Sadayappan are with the Ohio State University.)

Abstract— Although theoretical results have been established regarding the utility of preemptive scheduling in reducing average job turnaround time, job suspension/restart is not much used in practice at supercomputer centers for parallel job scheduling. A number of questions remain unanswered regarding the practical utility of preemptive scheduling. We explore this issue through a simulation-based study, using real job logs from supercomputer centers. We develop a tunable selective-suspension strategy and demonstrate its effectiveness. We also present new insights into the effect of preemptive scheduling on different job classes and deal with the impact of suspensions on worst-case response time. Further, we analyze the performance of the proposed schemes under different load conditions.

Index Terms— Preemptive scheduling, Parallel job scheduling, Backfilling.

I. INTRODUCTION

Although theoretical results have been established regarding the effectiveness of preemptive scheduling strategies in reducing average job turnaround time [1]–[5], preemptive scheduling is not currently used for scheduling parallel jobs at supercomputer centers. Compared to the large number of studies that have investigated nonpreemptive scheduling of parallel jobs [6]–[21], little research has been reported on evaluation of preemptive scheduling strategies using real job logs [22]–[25]. The basic idea behind preemptive scheduling is simple: if a long-running job is temporarily suspended and a waiting short job is allowed to run to completion first, the wait time of the short job is significantly decreased, without much fractional increase in the turnaround time of the long job.

Consider a long job with run time R_l. After time t, let a short job arrive with run time R_s. If the short job were to run after completion of the long job, the average turnaround time would be (R_l + (R_l - t + R_s))/2, or R_l + (R_s - t)/2. Instead, if the long job were suspended when the short job arrived, the turnaround times of the short and long jobs would be R_s and R_l + R_s, respectively, giving an average of R_l/2 + R_s. The average turnaround time with suspension is less if R_s < R_l - t, that is, if the remaining run time of the running job is greater than the run time of the waiting job.

The suspension criterion has to be chosen carefully to ensure freedom from starvation. Also, the suspension scheme should bring down the average turnaround times without increasing the worst-case turnaround times. Even though theoretical results [1]–[5] have established that preemption improves the average turnaround time, it is important to evaluate preemptive scheduling schemes using realistic job mixes derived from actual job logs from supercomputer centers, to understand the effect of suspension on various categories of jobs.

The primary contributions of this work are as follows:
- Development of a selective-suspension strategy for preemptive scheduling of parallel jobs,
- Characterization of the significant variability in the average job turnaround time for different job categories,
- Demonstration of the impact of suspension on the worst-case turnaround times of various categories, and development of a tunable scheme to improve worst-case turnaround times.

This paper is organized as follows. Section II provides background on parallel job scheduling and discusses prior work on preemptive job scheduling. Section III characterizes the workload used for the simulations. Section IV presents the proposed selective preemption strategies and evaluates their performance under the assumption of accurate estimation of job run times.
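The turnaround-time arithmetic in the introduction can be sanity-checked numerically. The sketch below is our own; the function and variable names (R_l, R_s, t) do not appear in the paper:

```python
def avg_turnaround(R_l, R_s, t, suspend):
    """Average turnaround of a long job (run time R_l, started at 0)
    and a short job (run time R_s) arriving at time t."""
    if suspend:
        short_tat = R_s              # short job runs immediately on arrival
        long_tat = R_l + R_s         # long job is delayed by R_s
    else:
        short_tat = (R_l - t) + R_s  # short job waits for the long job
        long_tat = R_l
    return (short_tat + long_tat) / 2

# Suspension wins exactly when the remaining run time of the running
# job (R_l - t) exceeds the short job's run time R_s:
print(avg_turnaround(100, 10, 20, suspend=False))  # 95.0
print(avg_turnaround(100, 10, 20, suspend=True))   # 60.0
```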
Section V studies the impact of inaccuracies in user estimates of run time on the selective preemption strategies. It also models the overhead for job suspension and restart and evaluates the proposed schemes in the presence of overhead. Section VI describes the performance of the selective preemption strategies under different load conditions. Section VII summarizes the results of this work.

II. BACKGROUND AND RELATED WORK

Scheduling of parallel jobs is usually viewed in terms of a 2D chart with time along one axis and the number of processors along the other axis. Each job can be thought of as a rectangle whose width is the user-estimated run time

and height is the number of processors requested. Parallel job scheduling strategies have been widely studied in the past [26]–[33]. The simplest way to schedule jobs is to use the first-come-first-served (FCFS) policy. This approach suffers from low system utilization, however, because of fragmentation of the available processors. Consider a scenario where a few jobs are running in the system and many processors are idle, but the next queued job requires all the processors in the system. An FCFS scheduler would leave the free processors idle even if there were waiting queued jobs requiring only a few processors. Some solutions to this problem are to use dynamic partitioning [34] or gang scheduling [35]. An alternative approach to improve the system utilization is backfilling.

A. Backfilling

Backfilling was developed for the IBM SP1 parallel supercomputer as part of the Extensible Argonne Scheduling sYstem (EASY) [13] and has been implemented in several production schedulers [36], [37]. Backfilling works by identifying "holes" in the 2D schedule and moving forward smaller jobs that fit those holes. With backfilling, users are required to provide an estimate of the length of the jobs submitted for execution. This information is used by the scheduler to predict when the next queued job will be able to run. Thus, a scheduler can determine whether a job is sufficiently small to run without delaying any previously reserved jobs.

It is desirable that a scheduler with backfilling support two conflicting goals. On the one hand, it is important to move forward as many short jobs as possible, in order to improve utilization and responsiveness. On the other hand, it is also important to avoid starvation of large jobs and, in particular, to be able to predict when each job will run.
There are two common variants of backfilling, conservative and aggressive (EASY), that attempt to balance these goals in different ways.

1) Conservative Backfilling: With conservative backfilling, every job is given a reservation (start-time guarantee) when it enters the system. A smaller job is allowed to backfill only if it does not delay any previously queued job. Thus, when a new job arrives, the following allocation procedure is executed by a conservative backfilling scheduler. Based on the current knowledge of the system state, the scheduler finds the earliest time at which a sufficient number of processors are available to run the job for a duration equal to the user-estimated run time. This is called the "anchor point." The scheduler then updates the system state to reflect the allocation of processors to this job starting from its anchor point. If the job's anchor point is the current time, the job is started immediately.

An example is given in Fig. 1. The first job in the queue does not have enough processors to run. Hence, a reservation is made for it at the anticipated termination time of the longer-running job. Similarly, the second queued job is given a reservation at the anticipated termination time of the first queued job. Although enough processors are available for the third queued job to start immediately, it would delay the second job; therefore, the third job is given a reservation after the second queued job's anticipated termination time.

[Fig. 1. Conservative backfilling.]

Thus, in conservative backfilling, jobs are assigned a start time when they are submitted, based on the current usage profile. But they may actually be able to run sooner if previous jobs terminate earlier than expected. In this scenario, the original schedule is compressed by releasing the existing reservations one by one when a running job terminates, in the order of increasing reservation start-time guarantees, and attempting backfill for the released job.
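The anchor-point search can be sketched in a simplified discrete-time model of our own (real schedulers work on event-based usage profiles, not fixed time steps):

```python
# Sketch of conservative backfilling's anchor-point search.
# `free[t]` is the number of free processors at time step t
# over a fixed horizon (our own simplified model).
def find_anchor(free, procs, runtime):
    """Earliest step t at which `procs` processors stay free for `runtime` steps."""
    for t in range(len(free) - runtime + 1):
        if all(free[t + d] >= procs for d in range(runtime)):
            return t
    return None  # no feasible window inside the horizon

def reserve(free, procs, runtime):
    """Give the job a reservation at its anchor point and update the profile."""
    t = find_anchor(free, procs, runtime)
    if t is not None:
        for d in range(runtime):
            free[t + d] -= procs
    return t

profile = [2, 2, 4, 4, 4, 4]   # free processors at steps 0..5
print(reserve(profile, 3, 2))  # -> 2: first window with >= 3 free processors
print(reserve(profile, 4, 1))  # -> 4: steps 2-3 now hold only 1 free processor
```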
If, as a result of early termination of some job, "holes" of the right size are created for a job, then it gets an earlier reservation. In the worst case, each released job is reinserted in the same position it held previously. With this scheme, there is no danger of starvation, since a reservation is made for each job when it is submitted.

2) Aggressive Backfilling: Conservative backfilling moves jobs forward only if they do not delay any previously queued job. Aggressive backfilling takes a more aggressive approach and allows jobs to skip ahead provided they do not delay the job at the head of the queue. The objective is to improve the current utilization as much as possible, subject to some consideration for the queue order. The price is that execution guarantees cannot be made, because it is impossible to predict how much each job will be delayed in the queue.

An aggressive backfilling scheduler scans the queue of waiting jobs and allocates processors as requested. The scheduler gives a reservation guarantee to the first job in the queue that does not have enough processors to start. This reservation is given at the earliest time at which the required processors are expected to become free, based on the current system state.
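The head-of-queue reservation is commonly expressed as a "shadow time" (when the head job can start) and the "extra" processors left over at that time. A minimal sketch, under our own simplified model of running jobs as (end_time, procs) pairs:

```python
# Sketch of EASY/aggressive backfilling's guarantee for the
# head-of-queue job (our own simplified model, not a scheduler's
# actual implementation).
def shadow_and_extra(running, total, head_procs):
    """Return (shadow_time, extra): when the head job can start, and
    how many processors beyond its request are free at that moment."""
    free = total - sum(p for _, p in running)
    shadow = 0
    for end, procs in sorted(running):   # running jobs by end time
        if free >= head_procs:
            break
        free += procs                    # this job's processors come back
        shadow = end
    return shadow, free - head_procs

def can_backfill(job_len, job_procs, free_now, shadow, extra, now=0):
    # A job may backfill if it fits now AND either finishes before the
    # shadow time or stays within the head job's leftover processors.
    return job_procs <= free_now and (now + job_len <= shadow
                                      or job_procs <= extra)

running = [(10, 4), (25, 4)]                   # two running jobs, 10 CPUs total
shadow, extra = shadow_and_extra(running, 10, 8)
print(shadow, extra)                           # 25 2
print(can_backfill(20, 2, 2, shadow, extra))   # True: ends before the shadow time
print(can_backfill(30, 2, 2, shadow, extra))   # True: fits in the extra processors
```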

[Fig. 2. Aggressive backfilling.]

The scheduler then attempts to backfill the other queued jobs. To be eligible for backfilling, a job must require no more than the currently available processors and must satisfy either of two conditions that guarantee it will not delay the first job in the queue:
- It must terminate by the time the first queued job is scheduled to commence, or
- It must use no more nodes than are free at the time the first queued job is scheduled to start.
Figure 2 shows an example.

B. Metrics

Two common metrics used to evaluate the performance of scheduling schemes are the average turnaround time and the average bounded slowdown. We use these metrics for our studies. The bounded slowdown [38] of a job is defined as follows:

  bounded slowdown = max(1, (wait time + run time) / max(run time, 10))   (1)

The threshold of 10 seconds is used to limit the influence of very short jobs on the metric.

Preemptive scheduling aims at providing lower delay to short jobs relative to long jobs. Since long jobs have greater tolerance to delays as compared to short jobs, our suspension criterion is based on the expansion factor (xfactor), which increases rapidly for short jobs and gradually for long jobs:

  xfactor = (wait time + estimated run time) / estimated run time   (2)

C. Related Work

Although preemptive scheduling is universally used at the operating-system level to multiplex processes on single-processor systems and shared-memory multiprocessors, it is rarely used in parallel job scheduling. A large number of studies have addressed the problem of parallel job scheduling (see [38] for a survey of work on this topic), but most of them address nonpreemptive scheduling strategies.
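Both metrics are straightforward to compute; a small sketch (the function names are ours):

```python
def bounded_slowdown(wait, run, threshold=10):
    # Eq. (1): very short jobs are bounded by a 10-second threshold
    return max(1.0, (wait + run) / max(run, threshold))

def xfactor(wait, est_run):
    # Eq. (2): grows quickly for short jobs, slowly for long ones
    return (wait + est_run) / est_run

# A 60 s wait hurts a 30 s job far more than a 1-hour job:
print(xfactor(60, 30))    # 3.0
print(xfactor(60, 3600))  # ~1.017
```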
Further, most of the work on preemptive scheduling of parallel jobs considers the jobs to be malleable [3], [25], [39], [40]; in other words, the number of processors used to execute the job is permitted to vary dynamically over time.

In practice, parallel jobs submitted to supercomputer centers are generally rigid; that is, the number of processors used to execute a job is fixed. Under this scenario, the various schemes proposed for a malleable job model are inapplicable. Few studies have addressed preemptive scheduling under a model of rigid jobs, where the preemption is "local," that is, the suspended job must be restarted on exactly the same set of processors on which it was suspended.

Chiang and Vernon [23] evaluate a preemptive scheduling strategy called "immediate service (IS)" for shared-memory systems. With this strategy, each arriving job is given an immediate timeslice of 10 minutes, by suspending one or more running jobs if needed. The selection of jobs for suspension is based on their instantaneous xfactor, defined as (wait time + total accumulated run time) / (total accumulated run time). Jobs with the lowest instantaneous xfactor are suspended. The IS strategy significantly decreases the average job slowdown for the traces simulated. A potential shortcoming of the IS strategy, however, is that its preemption decisions do not reflect the expected run time of a job. The IS strategy can be expected to significantly improve the slowdown of aborted jobs in the trace. Hence, it is unclear how much, if any, of the improvement in slowdown is experienced by the jobs that completed normally. Moreover, no information is provided on how different job categories are affected.

Chiang et al. [22] examine the run-to-completion policy with a suspension policy that allows a job to be suspended at most once.
Both this approach and the IS strategy limit the number of suspensions, whereas we use a "suspension factor" to control the rate of suspensions, without limiting the number of times a job can be suspended.

Parsons and Sevcik [25] discuss the design and implementation of a number of multiprocessor preemptive scheduling disciplines. They study the effect of preemption under the models of rigid, migratable, and malleable jobs. They conclude that their proposed preemption scheme may increase the response time for the model of rigid jobs.

So far, few simulation-based studies have been done on preemption strategies for clusters. With no process migration, the

distributed-memory systems impose an additional constraint that a suspended job should get the same set of processors when it restarts. In this paper, we propose tunable suspension strategies for parallel job scheduling in environments where process migration is not feasible.

III. WORKLOAD CHARACTERIZATION

We perform simulation studies using a locally developed simulator with workload logs from different supercomputer centers. Most supercomputer centers keep a trace file as a record of the scheduling events that occur in the system. This file contains information about each job submitted and its actual execution. Typically the following data is recorded for each job:
- Name of job, user name, and so forth
- Job submission time
- Job resources requested, such as memory and processors
- User-estimated run time
- Time when job started execution
- Time when job finished execution

From the collection of workload logs available from Feitelson's archive [41], subsets of the CTC workload trace, the SDSC workload trace, and the KTH workload trace were used to evaluate the various schemes. The CTC trace was logged from a 430-node IBM SP2 system at the Cornell Theory Center, the SDSC trace from a 128-node IBM SP2 system at the San Diego Supercomputer Center, and the KTH trace from a 100-node IBM SP2 system at the Swedish Royal Institute of Technology. The other traces did not contain user estimates of run time. We observed similar performance trends with all three traces. In order to minimize the number of graphs, we report the performance results for the CTC and SDSC traces alone. This selection is purely arbitrary.

Although user estimates are known to be quite inaccurate in practice, as explained above, we first studied the effect of preemptive scheduling under the idealized assumption of accurate estimation, before studying the effect of inaccuracies in user estimates of job run time. Also, we first studied the impact of preemption under the assumption that the overhead for job suspension and restart was negligible, and then studied the influence of the overhead.

TABLE I
JOB CATEGORIZATION CRITERIA

                 1 Proc    2-8 Procs   9-32 Procs   >32 Procs
0 - 10 min       VS Seq    VS N        VS W         VS VW
10 min - 1 hr    S Seq     S N         S W          S VW
1 hr - 8 hr      L Seq     L N         L W          L VW
> 8 hr           VL Seq    VL N        VL W         VL VW

TABLE II
JOB DISTRIBUTION BY CATEGORY - CTC TRACE

                 1 Proc    2-8 Procs   9-32 Procs   >32 Procs
0 - 10 min       14%       8%          13%          9%
10 min - 1 hr    18%       4%          6%           2%
1 hr - 8 hr      6%        3%          9%           2%
> 8 hr           2%        2%          1%           1%

TABLE III
JOB DISTRIBUTION BY CATEGORY - SDSC TRACE

                 1 Proc    2-8 Procs   9-32 Procs   >32 Procs
0 - 10 min       8%        29%         9%           4%
10 min - 1 hr    2%        8%          5%           3%
1 hr - 8 hr      8%        5%          6%           1%
> 8 hr           3%        5%          3%           1%

TABLE IV
AVERAGE SLOWDOWN FOR VARIOUS CATEGORIES WITH NONPREEMPTIVE SCHEDULING - CTC TRACE

                 1 Proc    2-8 Procs   9-32 Procs   >32 Procs
0 - 10 min       2.6       4.76        13.01        34.07
10 min - 1 hr    1.26      1.76        3.04         7.14
1 hr - 8 hr      1.13      1.43        1.88         1.63
> 8 hr           1.03      1.05        1.09         1.15

TABLE V
AVERAGE SLOWDOWN FOR VARIOUS CATEGORIES WITH NONPREEMPTIVE SCHEDULING - SDSC TRACE

                 1 Proc    2-8 Procs   9-32 Procs   >32 Procs
0 - 10 min       2.53      14.41       37.78        113.31
10 min - 1 hr    1.15      2.43        4.83         15.56
1 hr - 8 hr      1.19      1.24        1.96         2.79
> 8 hr           1.03      1.09        1.18         1.43

Any analysis that is based only on the average slowdown or turnaround time of all jobs in the system cannot provide insights into the variability within different job categories. Therefore, in our discussion, we classify the jobs into various categories based on the run time and the number of processors requested, and we analyze the slowdown and turnaround time for each category.

To analyze the performance of jobs of different sizes and lengths, we classified jobs into 16 categories, considering four partitions for run time, Very Short (VS), Short (S), Long (L), and Very Long (VL), and four partitions for the number of processors requested, Sequential (Seq), Narrow (N), Wide (W), and Very Wide (VW). The criteria used for job classification are shown in Table I. The distribution of jobs in

the trace, corresponding to the sixteen categories, is given in Tables II and III.

Tables IV and V show the average slowdowns for the different job categories under a nonpreemptive aggressive backfilling strategy. The overall slowdown for the CTC trace was 3.58, and for the SDSC trace it was 14.13. Even though the overall slowdowns are low, from the tables one can observe that some of the Very Short categories have slowdowns as high as 34 (CTC trace) and 113 (SDSC trace). Preemptive strategies aim at reducing the high average slowdowns for the short categories without significant degradation to long jobs.

IV. SELECTIVE SUSPENSION

We first propose a preemptive scheduling scheme called Selective Suspension (SS), where an idle job may preempt a running job if its "suspension priority" is sufficiently higher than that of the running job. An idle job attempts to suspend a collection of running jobs so as to obtain enough free processors. In order to control the rate of suspensions, a suspension factor (SF) is used. This specifies the minimum ratio of the suspension priority of a candidate idle job to the suspension priority of a running job for preemption to occur. The suspension priority used is the xfactor of the job.

A. Theoretical Analysis

[Fig. 3. Two simultaneously submitted tasks T1 and T2, each requiring 'N' processors for 'L' seconds.]

Let T1 and T2 be two tasks submitted to the scheduler at the same time. Let both tasks be of the same length L and require the entire system for execution, with the system being free when the two tasks are submitted. Let "s" be the suspension factor. Before starting, both tasks have a suspension priority of 1. The suspension priority of a task remains constant while the task executes and increases while the task waits. One of the two tasks, say T1, will start instantly. The other task, say T2, will wait until its suspension priority becomes s times the priority of T1 before it can preempt T1. Now T1 will have to wait until its suspension priority becomes s times that of T2 before it can preempt T2. Thus, execution of the two tasks will alternate, controlled by the suspension factor. Figures 4, 5, and 6 show the execution pattern of the tasks T1 and T2 for various values of SF.

The optimal value for SF, to restrict the number of repeated suspensions by two similar tasks arriving at the same time, can be obtained as follows. Let x_w represent the suspension priority of the waiting job and x_r represent the suspension priority of the running job. The condition for the first suspension is x_w >= s * x_r = s. The preemption swaps the running job and the waiting job. Thus, after the preemption, x_w = 1 and x_r = s. The condition for the second suspension is x_w >= s * x_r = s^2. Similarly, the condition for the nth suspension is x_w >= s^n.

The lowest value of s for which at most n suspensions occur is determined by the value of x_w when the running job completes. At that point the wait time of the waiting job equals the run time L of the running job, so x_w = (L + L)/L = 2, and at most n suspensions occur if s^(n+1) >= 2. Thus, if the number of suspensions is to be 0, then s >= 2; for at most one suspension, s >= sqrt(2). With s = 1, the number of suspensions is very large, bounded only by the granularity of the preemption routine.

With all jobs having equal length, any suspension factor greater than 2 will not result in suspension and will behave the same as a suspension factor of 2. However, with jobs of varying length, the number of suspensions reduces with higher suspension factors. Thus, to avoid thrashing and to reduce the number of suspensions, we use different suspension factors between 1.5 and 5 in evaluating our schemes.

B. Preventing Starvation without Reservation Guarantees

With priority-based suspension, an idle job can preempt a running job only if its priority is at least SF times greater than the priority of the running job. All the idle jobs that are able to find the required number of processors by suspending lower

priority running jobs are selected for execution by preempting the corresponding jobs. All backfilling scheduling schemes use job reservations for one or more jobs at the head of the idle queue as a means of guaranteeing finite progress and thereby avoiding starvation. But start-time guarantees do not have much significance in a preemptive context. Even if we give start-time guarantees for the jobs in the idle queue, they are not guaranteed to run to completion. Since the SS strategy uses the expected slowdown (xfactor) as the suspension priority, there is an automatic guarantee of freedom from starvation: ultimately any job's xfactor will get large enough that it will be able to preempt some running job(s) and begin execution. Thus, one can use backfilling without the usual reservation guarantees. We therefore remove guarantees for all our preemption schemes.

[Fig. 4. Execution pattern of the tasks T1 and T2 when SF = 1. Here, t represents the minimum time interval between two suspensions.]

[Fig. 5. Execution pattern of the tasks T1 and T2 when 1 < SF < 2.]

[Fig. 6. Execution pattern of the tasks T1 and T2 when SF >= 2.]

Jobs in some categories inherently have a higher probability of waiting longer in the queue than do jobs with comparable xfactor from other job categories. For example, consider a VW job needing 300 processors and a Sequential job in the queue at the same time. If both jobs have the same xfactor, the probability that the Sequential job finds a running job to suspend is higher than the probability that the VW job finds enough lower-priority running jobs to suspend. Therefore, the average slowdown of the VW category will tend to be higher than that of the Sequential category. To redress this inequity, we impose a restriction that the number of processors requested by a suspending job should be at least half of the number of processors requested by the job that it suspends, thereby preventing the wide jobs from being suspended by the narrow jobs. The scheduler periodically (every minute) invokes the preemption routine.
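The SS suspension test can be sketched as follows; this is our own simplified model, and the Job fields and helper names are ours, not the paper's simulator:

```python
# Sketch of the Selective Suspension (SS) preemption test:
# an idle job may suspend a running job only if its xfactor is at
# least SF times larger AND it requests at least half as many
# processors (the width restriction that protects wide jobs).
from dataclasses import dataclass

@dataclass
class Job:
    wait: float      # seconds spent waiting in the queue
    est_run: float   # user-estimated run time, seconds
    procs: int       # processors requested

def xfactor(job):
    return (job.wait + job.est_run) / job.est_run

def can_preempt(idle, running, sf=2.0):
    return (xfactor(idle) >= sf * xfactor(running)
            and idle.procs * 2 >= running.procs)

short = Job(wait=600, est_run=300, procs=64)    # xfactor = 3.0
long_ = Job(wait=0, est_run=36000, procs=100)   # xfactor = 1.0
print(can_preempt(short, long_))                # True
```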

[Fig. 7. Average slowdown: SS scheme, CTC trace. Compared to NS, SS provides significant benefit for the VS, S, W, and VW categories; slight improvement for most of the L categories; but a slight deterioration for the VL categories. Compared to IS, SS performs better for all the categories except for the VS categories.]

[Fig. 8. Average turnaround time: SS scheme, CTC trace. The trends are similar to those with the average slowdown metric (Fig. 7).]

[Fig. 9. Average slowdown: SS scheme, SDSC trace. Compared to NS, SS provides significant benefit for the VS, S, W, and VW categories; slight improvement for most of the L categories; but a slight deterioration for the VL categories. Compared to IS, SS performs better for all the categories except for the VS categories.]

[Fig. 10. Average turnaround time: SS scheme, SDSC trace. The trends are similar to those with the average slowdown metric (Fig. 9).]

C. Selection of Jobs for Suspension

Let x_i be the suspension priority of a task T_i that requests p_i processors. Let P_i denote the set of processors allocated to T_i, let F_t denote the set of free processors at time t when preemption is attempted, and let |F_t| be the number of free processors at that time. The set S_i of tasks that can be preempted by task T_i consists of the running tasks T_j such that x_i >= SF * x_j and p_j <= 2 * p_i. Task T_i can be scheduled by preempting one or more tasks in S_i if and only if

  |F_t| + (sum of p_j over all T_j in S_i) >= p_i.

Let T_{j1}, T_{j2}, ..., T_{jx} be the elements of S_i, ordered so that their suspension priorities are nondecreasing, with ties broken first by start time and then by queue time. The set of tasks actually preempted by T_i is the shortest prefix T_{j1}, ..., T_{jk} of this ordering for which

  |F_t| + p_{j1} + ... + p_{jk} >= p_i.
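The candidate-selection step can be sketched as follows; the tuple layout and function names are our own assumptions, not the paper's notation:

```python
# Sketch of selecting which running jobs to suspend so an idle job
# can start (our own simplified model). `running` holds
# (xfactor, procs, start_time) tuples for the running jobs.
def select_victims(idle_xf, idle_procs, free, running, sf=2.0):
    """Return the list of jobs to suspend, or None if infeasible."""
    # Candidates: priority ratio satisfied AND not more than twice
    # as wide as the idle job (the width restriction).
    cands = [j for j in running
             if idle_xf >= sf * j[0] and j[1] <= 2 * idle_procs]
    cands.sort(key=lambda j: (j[0], j[2]))  # lowest priority first
    victims, avail = [], free
    for j in cands:
        if avail >= idle_procs:
            break
        victims.append(j)
        avail += j[1]                        # reclaim its processors
    return victims if avail >= idle_procs else None

running = [(1.0, 64, 5), (1.2, 32, 9), (4.0, 16, 2)]
print(select_victims(3.0, 48, 10, running))  # suspends only the 64-proc job
```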

[Fig. 11. Worst-case slowdown: SS scheme, CTC trace. SS is much better than NS for most of the categories and is slightly worse for some of the categories.]

[Fig. 12. Worst-case turnaround time: SS scheme, CTC trace. The trends are similar to those with the worst-case slowdown metric (Fig. 11).]

[Fig. 13. Worst-case slowdown for the TSS scheme: CTC trace. TSS improves the worst-case slowdowns for many categories without affecting the worst-case slowdowns for other categories.]

[Fig. 14. Worst-case turnaround times for the TSS scheme: CTC trace. TSS improves the worst-case turnaround times for many categories without affecting the worst-case turnaround times for other categories.]

