Forecast Value Added Analysis: Step-by-Step


White Paper: Forecast Value Added Analysis: Step by Step

Contents

Introduction
  What Is Forecast Value Added?
  The Naïve Forecast
  Sample Results
  Why Is FVA Important?
FVA Analysis: Step by Step
  Mapping the Process
  Collecting the Data
  Analyzing the Process
  Reporting the Results
  Interpreting Results
Further Application of FVA Analysis
Case Studies
  Academic Research
  Home Furnishings Manufacturer
  Pharmaceutical
  Automotive Supplier
  Technology Manufacturer
  Specialty Retailer
  Food and Beverage
Lean Approach to Forecasting
Bibliography
Appendix: Sample SAS Code for Creating FVA Report
  Sample SAS Code for FVA Report

About the Author
Content for this white paper was provided by Michael Gilliland, Product Marketing Manager at SAS.

Introduction

What Is Forecast Value Added?

Traditional forecasting performance metrics, such as mean absolute percent error (MAPE), tell you the size of your forecast error. However, these metrics tell you nothing about how efficient you are at forecasting, what your error should be, or whether your efforts are making the forecast better or worse.

To determine whether your forecasting efforts are making things better, we advocate using a simple metric called forecast value added, or FVA. There is nothing earth-shattering about this metric; it just gives a name to a fundamental method of science that is too often overlooked in business.

So, what is FVA analysis, and why do we do it? Consider this example: Suppose a pharmaceutical company announces a pill for colds and touts that after 10 days, your cold will be gone. Does this sound like a grand medical breakthrough? Will you be buying the pill, or investing in the company? Or has the description of the pill's curative power raised some suspicion? It should. Doesn't a cold go away after 10 days anyway? What value is such a pill adding? Shouldn't we require demonstration that the pill does something worthwhile?

This is exactly the kind of situation we face in forecasting; but in forecasting, we aren't nearly suspicious enough. Why do we assume all our elaborate systems and processes are adding value by making the forecast better? What would happen if we did away with them and used just the simplest of forecasting methods – what results would we achieve then? These are the sorts of things MAPE, by itself, will not tell you. But these are the sorts of things that FVA analysis lets you investigate.

This white paper defines and illustrates the FVA calculation and provides the details for conducting FVA analysis at your organization. Because forecasting is often a visible and politicized function, and FVA results can be embarrassing, we also discuss how to most effectively present your findings to colleagues and management.
The white paper concludes with case studies of several companies that have applied FVA analysis and publicly presented their results.

FVA is the change in a forecasting performance metric that can be attributed to a particular step or participant in the forecasting process.

FVA is a metric for evaluating the performance of each step and each participant in the forecasting process. It is defined as the change in a forecasting performance metric – whatever metric you happen to be using, such as MAPE, forecast accuracy or bias – that can be attributed to each particular step and participant in your forecasting process.

FVA is a common-sense approach that is easy to understand. It expresses the results of doing something versus having done nothing. FVA can be either positive or negative, telling you whether your efforts are adding value by making the forecast better, or whether you are just making things worse.

Figure 1: Simple forecasting process.

Let's look at an example with a very simple forecasting process. Perhaps the simplest process is to read the demand history into a statistical forecasting model that generates a forecast, and then have an analyst review and (if necessary) override the statistical forecast.

In FVA analysis, you would compare the analyst's override to the statistically generated forecast to determine whether the override makes the forecast better. FVA analysis also compares both the statistical forecast and the analyst forecast to what's called a naïve forecast. (We discuss naïve forecasts in the next section.)

Suppose you found that the statistical forecast achieved a mean absolute percent error of 25 percent, and that the analyst overrides actually reduced MAPE to 24 percent. In this case, we'd say that the extra step of having an analyst review and adjust the statistical forecasts is adding value by making the forecast better.

The reason we measure FVA is to identify waste and inefficiency in the forecasting process.
When FVA is negative – that is, when a process activity is making the forecast worse – then clearly this activity is a waste and should be eliminated. Eliminating waste saves company resources that can be directed to more productive activities. You also get better forecasts by eliminating those activities that just make the forecast worse.

When FVA is negative, that is clearly a bad thing, and the process step or participant with negative FVA should be eliminated. But when FVA is positive, as in this example, do we conclude that the step or participant (here, the analyst override) should be kept in the process? Not necessarily.

The mere fact that a process activity has positive FVA doesn't necessarily mean that you should keep it in your process. You need to compare the overall financial benefits of the improvement to the cost of that activity. Is the extra accuracy increasing your revenue, reducing your costs or making your customers happier? In this example, the analyst override did reduce error by one percentage point. But having to hire an analyst to review every forecast can get costly, and if the improvement is only one percentage point, is it really worth it?

The Naïve Forecast

FVA analysis is based on a simple, scientific method. When a pharmaceutical company comes up with a new pill, it must demonstrate that the pill is safe and effective. Part of this demonstration is to run a controlled experiment, such as finding 100 people with colds, randomly dividing them into two groups, and giving one group the new pill and the other a placebo. If you find that those who get the pill overcome their cold much faster, and suffer less severe symptoms, then you may conclude that the pill had an effect.
If there is little difference between the groups – perhaps if everyone overcomes the cold within 10 days whether they received the pill or not – you can probably conclude that the pill adds no value.

The nice thing about applying this approach to forecasting is that we have a placebo – something called the "naïve" forecast. Per the glossary of the Institute of Business Forecasting (IBF), a naïve forecast is something simple to compute, requiring the minimum of effort and manipulation to prepare a forecast. There are several commonly used examples:

The random walk, also called the "no-change" model, just uses your last-known actual value as the future forecast. For example, if you sold 12 units last week, your forecast is 12. If you sell 10 this week, your new forecast becomes 10.

For the seasonal random walk, you use the same period from a year ago as your forecast for this year. Thus, if last year you sold 50 units in June and 70 units in July, your forecast for June and July of this year would also be 50 and 70.

A moving average, or other simple statistical formula, is also suitable to use as your naïve model, because it's also simple to compute and takes minimal effort. The duration of the moving average is up to you, although a full year of data (12 months or 52 weeks) has the advantage of smoothing out any seasonality.

Consider the following graphical views of naïve forecasts with monthly data (Figures 2-5). If you use the random walk as your naïve model (as shown in Figure 2), then the forecast for all future periods is the last-known actual, which in this case is 40 units in May 2008.

Figure 2: Random walk model (forecast last known actual).

If you use a seasonal random walk as your naïve model (as shown in Figure 3), then the forecast for all future periods is the actual from the same period in the prior year. Therefore, June 2008 through May 2009 is forecast to look exactly like June 2007 to May 2008.

Figure 3: Seasonal random walk model (forecast actual from same period last year).

Figure 4 shows a 12-month moving average for the naïve forecast, which happens to be 55.4 for this sales data.

Figure 4: Moving average model (forecast moving average of actuals).

As you can see, you may get wildly different forecasts from different choices of the naïve model (shown in Figure 5). Depending on the nature of the pattern you are trying to forecast, some naïve models may forecast much better than others. So which one should you choose? The random walk is the traditional naïve model of choice and can serve as the ultimate baseline for comparison.

Figure 5: Forecasts from different naïve models.
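As a concrete sketch, the three naïve models described above can be computed in a few lines of Python. The demand values here are invented for illustration (only the final actual of 40 echoes the Figure 2 example); the paper's own series is not reproduced:

```python
# Illustrative monthly demand history (hypothetical values; the most
# recent actual is 40, echoing the paper's Figure 2 example).
history = [62, 58, 71, 66, 49, 55, 60, 74, 68, 52, 57, 40]

# Random walk (NF1): the forecast for every future period is the last actual.
random_walk = history[-1]

# Seasonal random walk (NF2): each future month is forecast by the actual
# from the same month one year earlier.
seasonal_random_walk = history[-12:]

# 12-month moving average: one flat forecast that smooths out seasonality.
moving_average = sum(history[-12:]) / 12

print(random_walk)               # 40
print(seasonal_random_walk[:3])  # [62, 58, 71]
print(round(moving_average, 1))  # 59.3
```

Note how the three models disagree sharply (40 vs. roughly 59 vs. a repeating seasonal pattern), which is exactly the Figure 5 point: the choice of naïve model matters, so pick one and use it consistently.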

Sample Results

Figure 6 gives an example of an FVA report, showing how you would compare each process step to the naïve model.

Process Step    MAPE    FVA vs. Naïve    FVA vs. Statistical
Naïve           25%     --               --
Statistical     20%     +5%              --
Override        30%     -5%              -10%

Figure 6: An FVA report.

In this case, the naïve model was able to achieve a MAPE of 25 percent. The statistical forecast added value by reducing MAPE five percentage points, to 20 percent. However, the analyst override actually made the forecast worse, increasing MAPE to 30 percent. The override's FVA was negative five (-5) percentage points compared to the naïve model and negative ten (-10) percentage points compared to the statistical forecast.

You may wonder how adding human judgment to the statistical forecast could possibly make it worse. This actually happens all the time, and research (discussed later in this paper) investigates the topic.

Why Is FVA Important?

We've seen how FVA is defined and measured, and an example of how you can report it. But why is FVA such an important metric?

FVA is important because it helps you identify waste in your forecasting process. By identifying and eliminating the activities that do not add value (those activities that are not making the forecast better), you can streamline your process. FVA helps you ensure that any resources you've invested in the forecasting process – from computer hardware and software to the time and energy of analysts and management – are helping. If they are not, then redirect the resources and time to activities that are doing something worthwhile.

The nice thing about FVA is that when you eliminate those activities that are just making the forecast worse, you can actually get better forecasts for free!

FVA can also be used as a basis for performance comparison. Suppose you are a forecasting manager and have a bonus to give to your best forecast analyst. The traditional way to determine the best one is to compare their forecast errors.
Based on this traditional analysis, as shown in Figure 7, Analyst A is clearly the best forecaster and deserves the bonus. But is the traditional analysis the correct analysis?

Analyst    MAPE
A          20%
B          30%
C          40%

Figure 7: Comparing analyst performance – traditional approach.

What if we consider additional information about each analyst and the types of products that they have been assigned to forecast?

As shown in Figure 8, Analyst A had the lowest MAPE, but we must note the kinds of products that were assigned: long-running, basic items with no seasonality or promotional activity, no new items and low demand variability. In fact, an FVA analysis might reveal that a naïve model could have forecast this sort of demand with a MAPE of only 10 percent, and that Analyst A only made the forecast worse!

Analyst    Products Assigned                           MAPE    Naïve MAPE    FVA
A          Stable basic items, low variability         20%     10%           -10%
B          Items with promotions and new products      30%     30%           0%
C          Short-lifecycle fashion, high variability   40%     50%           +10%

Figure 8: Comparing analyst performance – FVA approach.
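The comparison behind Figures 7 and 8 reduces to simple arithmetic. Below is a Python sketch using the analyst MAPEs from Figure 7 and the naïve MAPEs implied by the surrounding discussion (Analyst A: 10 percent; Analyst B: assumed equal to the analyst's own 30 percent; Analyst C: 50 percent):

```python
# analyst -> (analyst MAPE %, naive-model MAPE % on the same items)
analysts = {
    "A": (20, 10),   # easy, stable demand: the naive model did better
    "B": (30, 30),   # harder demand: the analyst matched the naive model
    "C": (40, 50),   # very hard demand: the analyst beat the naive model
}

# FVA = naive MAPE - analyst MAPE (positive means value was added).
fva = {name: naive - mape for name, (mape, naive) in analysts.items()}
best = max(fva, key=fva.get)

print(fva)   # {'A': -10, 'B': 0, 'C': 10}
print(best)  # C
```

The point the code makes explicit: ranking by raw MAPE picks Analyst A, while ranking by FVA picks Analyst C, because the two metrics answer different questions.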

For Analyst B, demand was less easy to forecast, with factors such as promotional activity and new items that make forecasting so difficult. FVA analysis reveals that this analyst added no value compared to a naïve model – but at least this person didn't make the forecast worse.

What FVA analysis reveals is that only Analyst C deserves the bonus. Even though Analyst C had the worst forecasts, with a MAPE of 40 percent, Analyst C was challenged with items that are very difficult to forecast – short-lifecycle fashion items with high promotional activity and high demand variability. Only Analyst C actually added value compared to a naïve model and made the forecast better.

This example reveals another thing to be wary of in traditional performance comparison, as you see in published forecasting benchmarks. Don't compare yourself, or your organization, to what others are doing. The organization that achieves best-in-class forecast accuracy may do so because it has easier-to-forecast demand, not because its process is worthy of admiration. The proper comparison is your performance versus a naïve model. If you are doing better, that is good. But if you or your process is doing worse than a naïve model, then you have some serious (but fixable) problems.

MAPE is probably the most popular forecasting performance metric, but by itself it is not legitimate for comparing forecasting performance. MAPE tells you the magnitude of your error, but it does not tell you what error you should be able to achieve. By itself, MAPE gives no indication of the efficiency of your forecasting process. To understand these things, you need to use FVA analysis.

FVA Analysis: Step by Step

We've seen why FVA is an important metric. Now we tackle the nuts and bolts of how to conduct FVA analysis at your organization.

Mapping the Process

The first step in FVA analysis is to understand and map your overall forecasting process.
The process may be very simple (as shown in Figure 9A), with just a statistically generated forecast and a manual override. Or (as shown in Figure 9B), it can be an elaborate consensus or collaborative process, with participation from different internal departments like sales, marketing and finance. It might also include inputs from customers or suppliers, if you are using the collaborative planning, forecasting and replenishment (CPFR) process.

Figures 9A-9B: Simple and complex forecasting processes.

Many organizations also have a final executive review step, where general managers, division presidents, or even CEOs get a chance to change the numbers before approving them. This can translate into a great deal of high-cost management time spent on forecasting. But does it make the forecast any better? That is what we are trying to find out.

Collecting the Data

After you identify all of the steps and participants in your forecasting process and map the process flow, you must gather data. The data for FVA analysis is the forecast provided by each participant and each step of the process. You need to gather this information at the most granular level of detail (such as an item at a location), as shown in the "Level of Forecasting Hierarchy" columns in Figure 10. You also need to record the time bucket of the forecast, which is typically the week or month that you are forecasting. In addition, you must record the actual demand (or sales) in the time bucket that you were trying to forecast.

The "Forecast of Process Steps and Participants" columns contain the forecasts provided by each step and participant in the process. In this example, for the very simple process we showed earlier, you only need to gather the naïve forecast (using whatever naïve model you decide to use), the statistical forecast generated by your forecasting software and the final forecast that includes any manual overrides made by the forecast analyst.

Figure 10 also shows what an FVA data set would look like, with variable fields across the top and data records in each row. If you want to do a one-time FVA report for just a few items, you could do this much in Excel.

However, for a thorough and ongoing FVA analysis – and to make FVA a routine metric reported every period to management – you need much more powerful data handling, data storage, analytics and reporting capabilities than Excel provides. Both SAS Analytics Pro and SAS Visual Data Discovery are perfect entry-level solutions for FVA analysis.
For SAS Forecast Server customers, the Appendix contains sample code for generating a simple FVA report as a stored process.

A thorough and ongoing FVA analysis requires you to capture the forecast for each participant, at each step and in every period, for all of your item and location combinations. This will quickly grow into a very large amount of data to store and maintain, so you will need software with sufficient scalability and capability. Analysis on this scale is definitely not something you do in Excel.

Level of Forecasting Hierarchy: ITEM, LOCATION
Time and actuals: DATE, ACTUAL
Forecast of Process Steps and Participants: NAIVE, STAT, OVRD

Figure 10: FVA data elements for simple process.
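As a rough sketch of how a Figure 10-style data set drives the analysis, here is a Python version of a few such records and a per-step MAPE calculation. All item codes, locations and numbers are invented for illustration:

```python
# One record per item/location/period: the hierarchy fields, the actual,
# and the forecast from each process step (all values are hypothetical).
records = [
    # (ITEM,   LOC,   DATE,      ACTUAL, NAIVE, STAT, OVRD)
    ("A101", "X01", "2008-06", 100, 80, 95, 110),
    ("A101", "X01", "2008-07", 120, 100, 110, 115),
    ("A101", "X02", "2008-06", 50, 70, 55, 60),
]

def mape(rows, col):
    """Mean absolute percent error (%) of one forecast column."""
    return 100 * sum(abs(r[3] - r[col]) / r[3] for r in rows) / len(rows)

for step, col in [("NAIVE", 4), ("STAT", 5), ("OVRD", 6)]:
    print(f"{step}: MAPE {mape(records, col):.1f}%")
```

With these made-up numbers the statistical forecast beats the naïve model and the override sits in between, which is exactly the kind of pattern the granular data set lets you detect step by step.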

Analyzing the Process

You can do FVA analysis with whatever traditional performance metric you are currently using, be that MAPE, forecast accuracy or anything else. Because FVA measures the change in the metric, it isn't so important which metric you use.

You must also decide what to use as your naïve forecasting model. The standard examples are the random walk, commonly known as Naïve Forecast 1 (NF1), and the seasonal random walk, known as Naïve Forecast 2 (NF2). NF2 is often preferred for seasonal data.

Recall that per the definition of a naïve forecast, it should be something simple to compute, requiring the minimum amount of effort. Some organizations have gone so far as to interpret this to mean that any automatically generated statistical model is suitable to use as a naïve model. Their argument is that once these more sophisticated models are created, there is no additional effort or cost to use them.

Comparing results to an automatically generated statistical forecast is a good practice. But it is always worthwhile to use NF1 as your ultimate point of comparison. You can't just assume that your statistical model is better than a random walk. Naïve models can be surprisingly difficult to beat. Some statistical forecasting software uses unsound methods, such as blindly picking models that best fit the historical data rather than selecting models that are most appropriate for good forecasting.

Reporting the Results

When reporting your results, remember that there are many comparisons to make. You probably don't have to report every single pair-wise comparison, but you should at least report FVA for the major chronological steps in the process. Thus, you would probably want to show FVA for:

• Statistical forecasts versus naïve forecasts.
• Analyst overrides versus the statistical forecasts.
• Consensus or collaborative forecasts versus the analyst overrides.
• Executive-approved forecasts versus the consensus forecasts.

You will probably find that some of these steps add value, and others don't. When a particular step or participant is not adding value, you should first try to understand why. For example, do your statistical models need updating for better performance? Does your analyst need additional experience or training on when to make judgmental overrides, and when it's best to leave the statistical forecast alone? Do certain participants in your consensus process bias the results because of their own personal agendas? Do your executives only approve forecasts that are meeting the operating plan, and revise those forecasts that are falling below plan?

In some cases, it may be possible to improve performance with some education or technical training for the participants. In other cases, the only solution is to eliminate the problematic step or participant from the process. Most people are not going to complain when they are excused from the forecasting process. There aren't that many people who actually like being responsible for forecasting!

There is no rigid, fixed way to report your FVA results, and you are encouraged to be creative in your presentation. However, the "stairstep" table in Figure 11 is a good way to start. On the left side, you list the process steps or participants and their performance in terms of MAPE, accuracy, or whatever metric you are using. Columns to the right show the FVA from step to step in the process.

For a more elaborate process, the report layout is the same. You simply use more rows to show the additional process steps, and more columns to show the additional comparisons between steps. These reports should also indicate the hierarchical level at which you are reporting, such as the individual item and location, or an aggregation (such as product category by region). You would also indicate the time frame covered in the data.
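A minimal stairstep report of this kind can be generated mechanically once you have a MAPE per step. The step names and MAPE values below are illustrative only, not figures from the paper:

```python
# Process steps in chronological order with their (illustrative) MAPEs.
steps = [("Naive", 25.0), ("Statistical", 20.0),
         ("Override", 21.0), ("Consensus", 19.0)]

for i, (name, mape) in enumerate(steps):
    # FVA of this step versus each earlier step (positive = value added).
    fva = {prev: round(prev_mape - mape, 1) for prev, prev_mape in steps[:i]}
    print(f"{name:<12} MAPE {mape:4.1f}%  FVA vs. {fva}")
```

Each row gains one more comparison column than the row above it, which is what gives the report its stairstep shape.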

Figure 11: FVA report for complex process (a stairstep table for ITEM xxx, LOCATION xxx, TIME: MM/YYYY - MM/YYYY, with rows for each process step – naïve, statistical, analyst override, consensus and the executive-approved forecast – and columns for MAPE, FVA vs. naïve, FVA vs. statistical, FVA vs. override and FVA vs. consensus).

This style of report should be easy to understand. We see that the overall process is adding value compared to the naïve model, because in the bottom row the approved forecast has a MAPE 10 percentage points less than the MAPE of the naïve forecast. However, it also shows that we would have been better off eliminating the executive review step, because it actually made the MAPE five percentage points worse than the consensus forecast. It is quite common to find that executive tampering with a forecast just makes it worse.

As mentioned previously, FVA versus the naïve forecast can vary depending on which naïve model you choose. For example, if you are dealing with seasonal demand, then a seasonal random walk may provide much better forecasts than a plain random walk. The right thing to do is to decide which naïve model, or composite of naïve models, you are going to use, and then use it consistently throughout your analysis.

Also, be aware that naïve forecasts can be surprisingly difficult to beat. When you report your results, they may be rather embarrassing to those participants who are failing to add value. Therefore, present the results tactfully. Your objective is to improve the forecasting process – not to humiliate anyone. You may also want to present initial results privately, to avoid public embarrassment for the non-value adders.

Interpreting Results

The FVA approach is intended to be objective and scientific, so you must be careful not to draw conclusions that are unwarranted by the data. For example, measuring FVA over one week or one month does not provide enough data to draw any valid conclusions. Period to period, FVA will go up and down, and over short time frames FVA may be particularly high or low simply due to randomness.
When you express the results in a table, as we've shown up to this point, be sure to indicate the time frame reported, and make sure that time frame has been long enough to provide meaningful results.

It is ideal if you have a full year of data from which to draw conclusions. If you've been thoroughly tracking inputs to the forecasting process already, then you probably have the data needed to do the analysis right now. You can look at the last year of statistical forecasts, analyst overrides, consensus forecasts, executive-approved forecasts and actual results, and then compute the FVA. Because naïve models are always easy to reconstruct for the past, you can see how well a naïve model would have done with your data last year.

While a full year of data is ideal, if you are just starting to collect forecast data, then you might not have to wait a full year to draw conclusions. Graphical presentation of this data, using methods from statistical process control, is a big help here.

Let's suppose that you just recently started gathering the data needed for FVA analysis, and so far you have 13 weeks of data. Depending on what you find, this may be enough information to draw some conclusions. We'll look at two situations that you might encounter. For additional examples and ideas on how to interpret and report data using an approach from statistical process control, see Donald Wheeler's excellent book Understanding Variation. Wheeler delivers a savage criticism of normal management analysis and reporting, exposing the shoddiness of typical business thought and decision making, and the general lack of appreciation for things like randomness and variation.

Following the spirit of Wheeler's message, let's look at a situation you might encounter with 13 weeks of FVA data. Figure 12 shows MAPE for the statistical forecast in the solid pink line, MAPE for the consensus forecast in the dotted dark blue line and FVA for the consensus process in the dashed yellow line. Over the entire 13 weeks, MAPE for the consensus forecast is 3.8 percentage points lower than MAPE for the statistical forecast, so FVA is positive. It would appear that the consensus step is adding value by delivering a forecast that has lower error than the statistical forecast. But is this enough data to draw a definite conclusion – that the consensus process is a good use of resources?

In this situation, you probably can't yet draw that conclusion. As you see from all the lines, there is quite a large amount of variation in the performance of the statistical model, the consensus process, and the resulting FVA. You also see that the FVA is positive in only six of the 13 weeks. Wheeler's book provides methods for assessing the amount of variation. Because the overall difference between statistical and consensus performance is relatively small, and there is so much variability in the results, the positive FVA may just be due to randomness. In a case like this, you probably need to gather more data before drawing any conclusions about the efficacy of the consensus process.

Figure 12: Situation 1 (weekly MAPE over 13 weeks: statistical MAPE 25.1%, consensus MAPE 21.3%, FVA 3.8 percentage points).
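In Wheeler's spirit, a quick way to sanity-check 13 weeks of FVA data is to look at the mean, spread and sign pattern before declaring victory. This sketch uses invented weekly FVA values shaped like Situation 1 (a small positive average, positive in only six of 13 weeks), and a crude three-standard-error rule of thumb rather than Wheeler's actual process-behavior charts:

```python
import statistics

# Hypothetical weekly FVA (consensus vs. statistical), in percentage points.
weekly_fva = [5.1, -3.2, 8.4, -1.0, 6.7, -4.5, -2.3,
              -2.8, 9.1, -0.6, 4.0, -5.2, 7.9]

mean_fva = statistics.mean(weekly_fva)
positive_weeks = sum(1 for x in weekly_fva if x > 0)

# Crude signal-vs.-noise check: is the mean large relative to the
# week-to-week spread? (3 standard errors, a rough stand-in for SPC limits.)
stderr = statistics.stdev(weekly_fva) / len(weekly_fva) ** 0.5
conclusive = abs(mean_fva) > 3 * stderr

print(f"mean FVA {mean_fva:.1f} pts, positive in {positive_weeks} of 13 weeks")
print("beyond noise:", conclusive)  # False -- gather more data
```

With this much week-to-week variation, a positive average alone does not establish that the consensus step adds value, which is exactly the caution the text urges.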

Figure 13: Situation 2 (weekly MAPE over 13 weeks: statistical MAPE 10.6%, with the consensus MAPE consistently higher).

In Figure 13, we again see MAPE for the statistical forecast in the solid orange line, MAPE for the consensus forecast in the dotted dark blue line and FVA in the dashed yellow line. Here, we find that the consensus forecast has consistently done worse than the statistical forecast. In this case, the FVA is very negative (averaging -18.2 percentage points), with positive FVA in only two of the 13 weeks. The data seems to indicate that the consensus step is not adding value, and is in fact making the forecast worse. At this point, you may want to bring these findings to your management and try to understand why the consensus process is having this effect. You can start to investigate the dynamics of the consensus meeting and the political agendas of the participants. Ultimately, you must decide whether the consensus process can be fixed to improve the value of the forecast, or whether it should be eliminated.

FVA analysis lets you take an objective, scientific and data-driven approach to process analysis. The point of all this is to encourage you to at least conduct a rudimentary FVA analysis and determine whether your process is beating a naïve model. This can be done quite easily; most organizations will have the data necessary for a limited-scale, quick-and-dirty analysis in Excel. Thorough and ongoing FVA reporting takes more effort, more data and more robust software tools (and perhaps even IT department involvement). However, several organizations are now doing this, or are in the midst of building their own FVA tracking and reporting systems. SAS users can take advantage of the sample code provided in the Appendix to create their own reports.

Simply put, the message is this: If y

