
The 14th International Conference on Intelligent System Applications to Power Systems, ISAP 2007, November 4-8, 2007, Kaohsiung, Taiwan

Long-Term Load Forecasting Using System-Type Neural Network Architecture

Nathaniel J. Hobbs, Byoung H. Kim, and Kwang Y. Lee, Fellow, IEEE

This work was supported in part by the U.S. National Science Foundation under Grant ECS-0501305. Nathaniel Hobbs and Byoung H. Kim are with the Electrical Engineering Department, the Pennsylvania State University, University Park, PA 16802 USA (e-mail: njh152@psu.edu; bxk232@psu.edu). K. Y. Lee is with the Electrical and Computer Engineering Department, Baylor University, Waco, TX 76798 USA (phone: (254) 710-4195; e-mail: Kwang_Y_Lee@baylor.edu).

Abstract—This paper presents a methodology for forecasting long-term electric power demands using a semigroup-based, system-type neural network architecture. The assumption is that, given enough data, the next year's loads can be predicted using only components from the previous few years. The methodology is applied to recent load data, and the next year's load is satisfactorily forecasted. The method also provides a more in-depth forecasted time interval than other methods, which predict only the average or peak power demand in the interval.

Index Terms—Decomposition, load forecasting, neural network, system-type architecture.

I. INTRODUCTION

Accurate load forecasting is very important for electric utilities in the competitive environment created by electric industry deregulation. In order to supply high-quality electric energy to the customer in a secure and economic manner, an electric company faces many economic and technical problems in the operation, planning, and control of an electric energy system [1]. Load forecasting helps an electric utility to make important decisions, including decisions on purchasing and generating electric power, load switching, and infrastructure development. Load forecasting is also important for energy suppliers, financial institutions, and other participants in electric energy generation, transmission, distribution, and markets [2].

Load forecasting can be divided into three categories: short-term forecasting, which is usually from one hour to one week; medium-term forecasting, which is from a week to a month; and long-term forecasting, which can extend to several months or years. The short-term forecast is needed for control and scheduling of the power system, and also as an input to load-flow studies or contingency analysis [1]. In addition, short-term load forecasting can help to estimate load flows and to make decisions that can prevent overloading. The long-term and medium-term forecasts are used to determine the capacity of generation and transmission, distribution system additions, the type of facilities required in transmission expansion planning, annual hydrothermal maintenance scheduling, etc.

The load is a non-stationary process which is affected by two main factors: the time of day and the weather conditions. The time dependence of the load reflects the existence of a daily load pattern, which may vary for different weekdays and seasons. Temperature is the primary weather factor affecting the load; humidity and wind speed are among the other factors that may also influence power consumption. For models including weather variables, the total load may be decomposed into the weather-sensitive load and the non-weather-sensitive load, with the weather-sensitive load mostly predicted using correlation techniques [1].

Most forecasting methods use statistical techniques or artificial intelligence algorithms such as regression, neural networks, fuzzy logic, and expert systems [1]. A variety of methods, including various regression models, time series, neural networks, statistical learning algorithms, fuzzy logic, and expert systems, have been developed for long-term forecasting. The downside to these methods is that they usually predict only a single value (average, peak, or total power) for any given month or time interval. The proposed method, however, provides a broader view of the forecasted load in that it predicts every hour of every day for the next year.

In general, the load has two distinct patterns: weekday and weekend patterns. Weekday patterns include Tuesday through Friday, and weekend patterns include Sunday through Monday. In addition, holiday patterns are different from non-holiday patterns. In this paper, no distinction is made between holidays and non-holidays.

Section II of this paper provides the general background to the load forecasting problem. Section III describes the proposed method. Section IV applies the method to the forecasting problem, and in Section V general conclusions are drawn from the results.

II. GENERAL BACKGROUND

Forecasting methods can generally be divided into two broad categories: parametric methods and artificial-intelligence-based methods. The parametric methods formulate a mathematical or statistical model of the load by examining qualitative relationships between the load and the factors affecting it. The assumed model parameters are then estimated from historical data, and the adequacy of the model

is verified by analysis of forecast errors. Artificial-intelligence-based methods use an artificial neural network as the load model.

For either of these methods, to perform long-term load forecasting, several factors should be considered, such as the time factor, weather data, and possible customer classes. The time factors include the season of the year, the day of the week, and the hour of the day. There are differences in load between weekdays and weekends. For example, Mondays and Sundays, being adjacent to weekends, may have structurally different loads than Tuesdays through Fridays. Obviously, the electric loads are very much dependent upon weather conditions. Load models which include weather variables, however, are limited in use by problems such as inaccuracy of weather forecasts and difficulties in modeling the weather-load relationship [12]. In this paper, the weather effects on the electric load are not explicitly considered.

Most long-term forecasting is done using artificial intelligence techniques. Many methods have previously been proposed using artificial neural networks (ANN), fuzzy logic, or some combination of the two [6]. Neural networks have become increasingly popular in the past few years because of their ability to model non-linear and very complex systems. Other methods model the overall load pattern by multiple linear regression [7]. Still others decompose the system using wavelet decomposition, with very good results [9]. However, most of these approaches only forecast the average, total, or peak values for a given interval of time.

III. THE PROPOSED METHOD

Recently, a shift has occurred in the overall architecture of neural networks from simple or component-type networks to system-type architectures. The most popular architecture seems to be the one advocated by Jacobs and Jordan [11], called the "Modular Connectionist Architecture". The most serious flaw in the design of system-type neural networks is the lack of a cohesive discipline in the architectural design and in the design of the learning algorithm; virtually the entire design is done on an intuitive basis. To illustrate this lack of a cohesive discipline: in [14], the partitioning of components corresponds to separation of variables, which works if the variables are separable and fails if they are not [3]-[5].

A. The System-type Neural Network Method

In previous papers [3]-[5], a system-type neural network was proposed which implemented extrapolation. In this method, the distributed parameter system (DPS) surface determined by a given data set was expanded along one axis. Rather than thinking of the load as Load = f(Day, Hour, Weather, Customer classes), this approach considers Load = f(Day, Hour), parameterized by weather and customer classes. Other parameters might be population growth and special events such as the Olympics. That is, the role of the parameters is to determine the transformation from one load surface to another. In the next section, it will be shown that the load for any year can be represented in the following form:

L(Day, Hour) = C(Day)^T E(Hour).    (1)

This entire method hinges on the assumption that the basis vector set remains very similar from year to year. On account of this, a basis set can be chosen and used for multiple years; thus, only the coefficient vector for any other year must be known in order to reconstruct the electric power demand.

Neural networks are being used for systems described by PDEs [8]. The system-type attribute of the neural network architecture is shown in Fig. 1, implementing an arbitrary function L(D, H). Unlike conventional neural network architectures that would attempt to achieve the mapping L(D, H) with one neural network, the proposed architecture reflects a system-type approach using two neural network channels, a Function Channel and a Semigroup Channel, in an adaptation of the connectionist architecture (Fig. 1). During use, the Semigroup Channel supplies the Function Channel with a coefficient vector C(D) as a function of the index D. The coefficient vector, when applied to the basis set E(H) of the Function Channel, causes the Function Channel to operate as one specific function from within a vector space of functions. Jointly, these two channels realize a semigroup-based implementation of the mapping L(D, H).

[Fig. 1: System-type architecture. The Function Channel (NN1) implements L(D, H) = C(D)^T E(H) from the input H; the Semigroup Channel (NN2) maps the index D and the initial vector C(0) to the coefficient vector C(D).]

The Function Channel can have a Radial Basis Function (RBF) architecture [11]. It consists of n RBF networks, each of which implements one orthonormal vector of an n-dimensional basis set of vectors, E(H). The dimensionality n is chosen as the minimum number of vectors which, when recombined with the coefficient vectors, will result in a reconstructed load demand within a given error tolerance. The outputs of the orthonormal vectors are (internally) linearly summed so that the channel spans an n-dimensional function space. The coefficients, which determine the linear sum and thereby define the specific function being implemented, are supplied by the Semigroup Channel. Up to this point, the operation of the RBF channel parallels the idea used by Phan

and Frueh [15]. One of the essential differences between their approach and the presently proposed approach is that the former requires prior engineering knowledge for selecting the basis vectors, while the latter requires no such knowledge. One advantage that RBF networks have over other architectures is that their functionality can be given an explicit mathematical expression in which the neuron activation functions act as Green's functions; this makes these networks amenable to design rather than training. Another advantage is that they function as universal approximators [16].

The Semigroup Channel can be adapted from the Diagonal Recurrent Neural Network (DRNN) or the Elman architecture [13], in which the input is split into a dynamic scalar component D and one static vector component, the vector C(0). The output is a vector C(D), which is related to the dynamic input D and to the static input C(0) by the semigroup property:

C(D) = Phi(D) C(0), where Phi(D1 + D2) = Phi(D1) Phi(D2).    (2)

B. Learning Algorithm of the Proposed System-type NN

The first component of the system, the Function Channel, can be designed rather than trained, since it is composed of RBF components. The second component, the Semigroup Channel, can be trained in the new way illustrated in Fig. 2. During training, the Semigroup Channel receives as input a preliminary coefficient vector C(D) and produces a smoothed coefficient vector, C~(D). That is, the primary objective of training is to replicate (and, if necessary, to smoothen) the vector C(D) with a vector C~(D) which has the following semigroup property:

C~(D) = Phi(D) C(0),    (3)

where Phi(D) is an n x n matrix that satisfies:

Phi(D1 + D2) = Phi(D1) Phi(D2).    (4)

However, there is a secondary objective of training: the channel must also "replicate" the semigroup property of the trajectory by gradually acquiring a semigroup property of its own, in the weight space. The existence of this acquired semigroup property in the weight space becomes the basis for extrapolation [3]. In order to elicit this gradual acquisition of the semigroup property, it is necessary that the training in this second step (semigroup tracking) occur in a gradual manner, as shown in Fig. 2: the entire trajectory is split into successively longer sub-trajectories, and the network is trained on each of these consecutive sub-trajectories until the weights converge.

[Fig. 2: Overview of the new training algorithm: the network is trained on successively longer windows (W1, W2, W3, W4) of the data points, looking for convergence of the weight vectors.]

IV. SIMULATION AND COMPARISON

Since each day has its own unique load pattern, the load data from 2000-2004, provided by Korea Electric Power Corporation (KEPCO), was separated into the different days of the week. Wednesday was arbitrarily chosen for extrapolation, and each year of Wednesday data was decomposed into a primary basis set and coefficient set of dimensionality n, where n is set to six. Dimensionalities of four and eight were also tried: four basis vectors did not provide satisfactory computed loads, while eight vectors did not result in significant improvement over six. A common basis set was obtained from the year 2000 and was used as the basis for the other years as well. The data from the year 2004 was set aside for later comparison with the forecasted data obtained by extrapolating the first three years. To illustrate the validity of the basis vectors and coefficient vectors, the empirical data for the year 2004 is shown in Fig. 3 for comparison with the computed load in Fig. 4, which was obtained using the rule:

L(Day, Hour) = C(Day)^T E(Hour).    (5)

Here, Day is a sequential number for a day of the week within the year, where Day = 1, ..., 52, as there are generally 52 occurrences of a given day of the week in a year.

[Fig. 3: Empirical load data for year 2004 (surface plot of load in MW versus Day and Hour).]
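The decomposition rule in (1) and (5) can be realized, for example, with a truncated singular value decomposition (SVD) of the day-by-hour load matrix: the right singular vectors act as the orthonormal basis E(Hour) and the scaled left singular vectors as the coefficient trajectories C(Day). The paper does not state how its basis set was computed, so the SVD below, along with the synthetic load surface standing in for the KEPCO data, is only an illustrative sketch.

```python
import numpy as np

# Synthetic stand-in for one year of Wednesday loads: 52 days x 24 hours.
rng = np.random.default_rng(0)
hours = np.arange(24)
days = np.arange(52)
daily_shape = 1.0 + 0.4 * np.sin((hours - 6) * np.pi / 12)   # daily load pattern
seasonal = 1.0 + 0.2 * np.cos(days * 2 * np.pi / 52)         # seasonal variation
L = np.outer(seasonal, daily_shape) + 0.01 * rng.standard_normal((52, 24))

# Truncated SVD: L ~= C @ E, with E an n x 24 set of orthonormal basis vectors.
U, s, Vt = np.linalg.svd(L, full_matrices=False)
n = 6                         # dimensionality used in the paper
E = Vt[:n]                    # basis vectors E(Hour)
C = U[:, :n] * s[:n]          # coefficient trajectories C(Day)

# Reconstruction via the rule L(Day, Hour) = C(Day)^T E(Hour).
L_hat = C @ E
rel_err = np.abs(L_hat - L).max() / np.abs(L).max()
print(f"max relative reconstruction error: {rel_err:.4f}")
```

Choosing n as described in Section III would amount to increasing n until rel_err falls below the desired tolerance; here the synthetic surface has low rank, so n = 6 reconstructs it almost exactly.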

[Fig. 4: Load computed from the primary vector sets (surface plot of load in MW versus Day and Hour).]

Ordinarily, when using the proposed method, extrapolation of the coefficient vectors from the year 2003 would have been attempted. After the coefficient vectors had been extended by the number of Wednesdays in the year 2004, they would have been recombined with the primary basis set of the year 2003, and the year 2004 would have been forecasted. However, it was found when the empirical data was decomposed that the coefficient vectors were highly non-smooth. This non-smooth property of the coefficient vectors, Fig. 5, made it impossible to extrapolate them directly.

[Fig. 5: Sample primary coefficient vector trajectories (coefficient vector 1 for the years 2002 and 2003), illustrating non-smoothness.]

It was seen that all the coefficient vectors from each year were well correlated with their counterparts in the other years (for example, see Fig. 5 for the primary coefficient vector trajectory). Therefore, each coefficient vector trajectory from each year is stacked with its counterparts from the remaining years. This creates new three-dimensional data sets, C_i, where i indexes the set of stacked coefficient vector trajectories. The new data sets can also be decomposed into a secondary basis and coefficient set; furthermore, because of the correlation, it is expected that the secondary coefficient vectors will be smooth. This new DPS is decomposed as:

C_i(Year, Day) = C_i^2(Year)^T E_i^2(Day),    (6)

where the superscript two indicates that this is the secondary decomposition. Each of these new three-dimensional data sets was decomposed into a set of four basis vectors, E_i^2(Day), and four coefficient vectors, C_i^2(Year). By choosing the appropriate basis vectors, smooth secondary coefficient vectors were obtained, and their extrapolation became possible. As a result of the secondary decomposition, it is no longer the day which is being extrapolated but the year: by extrapolating the secondary coefficient vectors from the set C_i^2(Year) along the year axis, the primary coefficient vector i for the next year is being predicted.

Training a simple recurrent network (SRN) with the proposed progressive training algorithm, a good-fitting smoothed vector was found, shown in Fig. 6, and the semigroup channel acquired a semigroup property of its own. The weights of the neural network were then replaced with a weight-change sequence calculated from the actual weight changes of the neural network within the observation window.

[Fig. 6: Comparison of original and smoothed secondary coefficient vectors.]

Note that these continuous coefficient vector trajectories represent discrete data points. In order for the weight changes to converge, however, the neural network needed a continuous training path; thus, continuous curves were fit to the discrete points. In other words, the coefficient vector trajectories are only valid at the points 2003, 2004, and 2005 on the year axis. The data used in training the network included only data from the last half of the year 2002 and all the data from the year 2004.

Extrapolating the smoothed coefficient vectors from the observation window into the test window confirmed that the weight-change sequence was valid: the differences between the smoothed vectors and the extrapolated vectors, as seen in Fig. 7, are minimal. Extending the weight-change sequence into the next region, the final extrapolated vectors were obtained, as shown in Fig. 8.

[Fig. 7: Extrapolation tests for secondary coefficient vectors from primary coefficient vector set C_1.]

[Fig. 8: Extrapolation for secondary coefficient vectors from primary coefficient vector set C_1.]

This extrapolation testing and final extrapolation was then performed for each coefficient vector set within the secondary decomposition. The extended secondary coefficient vectors, when recombined with their respective basis sets, result in a predicted primary coefficient vector for the year 2004, Fig. 9.

[Fig. 9: Comparison of actual and predicted primary coefficient vectors for the year 2004.]

For this paper, only primary coefficient vectors one and three are shown. They were selected because they were the most non-smooth, and thus the hardest to predict. As can be seen from Fig. 9, the predicted coefficient vector is very similar to the actual coefficient vector. Since the proposed method of extrapolation was successful for both of these, it is expected that it will also be successful for the rest of the primary coefficient vectors.

[Fig. 10: Forecasted load for year 2004 (surface plot of load in MW versus Day and Hour).]

The forecasted load, shown in Fig. 10, is very similar in shape to the empirical load demand in Fig. 3, though there appears to be significant error around day 40. Fig. 11 shows a graphical representation of the data in Table I.

[Fig. 11: Relative percent error of the forecasted load, in percent versus Day.]

The maximum relative error was 31.32%, and the average was 6.21%. Note that in the original load, Fig. 3, there is a significant negative peak on days 36 and 37. These are likely holidays, and certainly deviations from the normal Wednesday load pattern, which results in a high error. Though the error is relatively high compared to that of short-term forecasting, which is generally below 2%, it must be remembered that the purpose of long-term forecasting is not to precisely predict the load at any given time, but rather to predict the general trend

in the demands over an interval of time. The proposed method has several advantages over other long-term forecasting techniques. First, the load pattern can be seen for many different times of day, seasons of the year, etc.; this is a distinct advantage over methods which only predict the average, peak, or total demand for a month. Secondly, the proposed method provides the viewer with a more intuitive grasp of the overall load pattern.

To summarize the procedure: data from any day of the week is algebraically decomposed into a basis set and a coefficient set. Since the n vectors comprising the coefficient set are most likely non-smooth, n new three-dimensional functions are built by stacking the ith coefficient vector from each year. These in turn are decomposed in an attempt to find a smooth secondary coefficient set. If one is found, the vectors are extrapolated using the neural network system to predict the next year's coefficient vectors. Once all n coefficient vectors have been predicted, they are combined with the primary basis set and the forecasted year is obtained.

V. CONCLUSIONS

In this paper, a methodology was proposed to perform long-term electric power demand forecasting. The method was applied to an empirical data set, and the year 2004 was forecasted by predicting its primary coefficient vectors. It was shown that the proposed method achieved satisfactory results even though no attempt was made to separate holidays from non-holidays; if the holidays were separated out, it is expected that the error would be significantly lower. The method also provides a more in-depth forecasted time interval, rather than just predicting the average or peak power demand in the interval.

This paper also addressed the problem of extrapolating highly non-smooth coefficient vectors, by means of a secondary decomposition.

The method, though applied to Wednesdays in this paper, can be applied to any other day of the week, including weekends. Moreover, it is expected that similar results can be obtained for any day.

VI. ACKNOWLEDGMENT

The authors thank the Korea Electric Power Corporation (KEPCO) for providing the load data.

VII. REFERENCES

[1] K. Y. Lee, Y. T. Cha, and J. H. Park, "Short-term load forecasting using an artificial neural network," IEEE Trans. on Power Systems, vol. 7, pp. 124-132, Feb. 1992.
[2] J. H. Chow, Applied Mathematics for Restructured Electric Power Systems: Optimization, Control, and Computational Intelligence. New York: Springer-Verlag, 2005, ch. 12.
[3] K. Y. Lee, J. P. Velas, and B. H. Kim, "Development of an intelligent monitoring system with high temperature distributed fiberoptic sensor for fossil-fuel power plants," IEEE Power Engineering Society General Meeting, pp. 1350-1355, Jun. 6-10, 2004.
[4] B. H. Kim, J. P. Velas, and K. Y. Lee, "Development of intelligent monitoring system for fossil-fuel power plants using system-type neural networks and semigroup theory," IEEE Power Engineering Society General Meeting, pp. 2949-2954, 2005.
[5] B. H. Kim, J. P. Velas, and K. Y. Lee, "Semigroup based neural network architecture for extrapolation of enthalpy in a power plant," in Proc. 2005 International Conference on Intelligent Systems Application to Power Systems, pp. 291-296.
[6] M. A. Farahat, "Long-term industrial load forecasting and planning using neural networks technique and fuzzy inference method," in 39th International Universities Power Engineering Conference, 2004, vol. 1, pp. 368-372.
[7] H. M. Al-Hamadi and S. A. Soliman, "Long-term/mid-term electric load forecasting based on short-term correlation and annual growth," Electric Power Systems Research, vol. 74, no. 3, pp. 353-361, Jun. 2005.
[8] T. Q. D. Khoa, L. M. Phuong, P. T. T. Binh, and N. T. H. Lien, "Application of wavelet and neural network to long-term load forecasting," in Power System Technology Conference 2004, vol. 1, pp. 840-844.
[9] T. Q. D. Khoa, L. M. Phuong, P. T. T. Binh, and N. T. H. Lien, "Application of wavelet and neural network to long-term load forecasting," in 2004 International Conference on Power System Technology, vol. 1, pp. 840-844.
[10] Q. Ahsan and M. Uddin, "A probabilistic approach of electrical energy forecasting," in Proc. 2005 IEEE Instrumentation and Measurement Technology Conference, vol. 2, pp. 1070-1074.
[11] R. Jacobs and M. Jordan, "A competitive modular connectionist architecture," Advances in Neural Information Processing Systems, vol. 3, pp. 767-773, 1991.
[12] J. H. Park, Y. M. Park, and K. Y. Lee, "Composite modeling for adaptive short-term load forecasting," IEEE Trans. on Power Systems, vol. 6, pp. 450-457, May 1991.
[13] B. H. Kim, J. P. Velas, and K. Y. Lee, "Short-term load forecasting using system-type neural network architecture," in International Joint Conference on Neural Networks 2006, pp. 2619-2626.
[14] A. Atiya, R. Aiyad, and S. Shaheen, "A practical gated expert system neural network," in IEEE International Joint Conference on Neural Networks, vol. 1, pp. 419-424, 1998.
[15] M. Q. Phan and J. A. Frueh, "Learning control for trajectory tracking using basis functions," in Proc. 35th IEEE Conference on Decision and Control, pp. 2490-2492, Dec. 1996.
[16] S. Haykin, Neural Networks, 2nd ed., Prentice Hall, NJ, 1999.

VIII. BIOGRAPHIES

Nathaniel J. Hobbs received his B.S. degree in Electrical Engineering from the University of Missouri-Rolla in 2006. He is currently enrolled in the Ph.D. program in Electrical Engineering at the Pennsylvania State University, University Park, PA. His research interests are in neural networks, intelligent and classical control, and industrial automation.

Byoung-Hee Kim received his B.S. and M.S. degrees in Control and Instrumentation Engineering from the University of Ulsan in 1998 and 2000, respectively. He received his Ph.D. degree in Electrical Engineering in 2007 from the Pennsylvania State University, University Park, PA. His research interests are in neural networks, intelligent control, power plant control, and network-based control systems.

Kwang Y. Lee received his B.S. degree in Electrical Engineering from Seoul National University, Korea, in 1964, M.S. degree in Electrical Engineering from North Dakota State University, Fargo, in 1968, and Ph.D. degree in System Science from Michigan State University, East Lansing, in 1971. He has been with Michigan State, Oregon State, the University of Houston, Penn State, and Baylor University, where he is now a Professor and Chair of Electrical and Computer Engineering. His interests include power system control, operation, planning, and intelligent system applications to power systems. Dr. Lee is a Fellow of IEEE, Associate Editor of IEEE Transactions on Neural Networks, and Editor of IEEE Transactions on Energy Conversion. He is also a registered Professional Engineer.
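As a supplementary illustration of the semigroup property in (2)-(4): any fixed n x n matrix M generates a discrete semigroup Phi(D) = M^D, for which Phi(D1 + D2) = Phi(D1) Phi(D2) holds exactly, so an entire coefficient trajectory follows from the single initial vector C(0). The transition matrix below is arbitrary, chosen only for illustration; it is not one learned by the Semigroup Channel.

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
# A fixed transition matrix; Phi(D) = M^D satisfies the semigroup
# property Phi(D1 + D2) = Phi(D1) @ Phi(D2) for integer indices D.
M = np.eye(n) + 0.02 * rng.standard_normal((n, n))
C0 = rng.standard_normal(n)          # initial coefficient vector C(0)

def Phi(D):
    return np.linalg.matrix_power(M, D)

# Verify the semigroup property (4) for one pair of indices.
assert np.allclose(Phi(3 + 7), Phi(3) @ Phi(7))

# Coefficient trajectory C(D) = Phi(D) C(0), as in (3), over 52 weeks.
C_traj = np.array([Phi(D) @ C0 for D in range(52)])
print(C_traj.shape)
```

Extrapolation then amounts to evaluating Phi(D) C(0) for indices D beyond the training range, which is what the acquired semigroup property in the weight space is meant to enable.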

[Table I: Relative percent error of the forecasted load (shown graphically in Fig. 11).]
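The relative percent error summarized in Table I and Fig. 11 can be computed per day from the actual and forecasted load surfaces. The paper does not give its exact error definition, so the mean-normalized daily error below is an assumed form.

```python
import numpy as np

def relative_percent_error(actual, forecast):
    """Per-day relative percent error between (days x hours) load surfaces.

    Each day's mean absolute hourly error is normalized by that day's
    mean load (an assumed definition; the paper does not state one).
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    daily_err = np.abs(forecast - actual).mean(axis=1)
    return 100.0 * daily_err / actual.mean(axis=1)

# Toy check: a forecast uniformly 5% high gives a 5% error every day.
actual = np.full((52, 24), 40000.0)
forecast = 1.05 * actual
err = relative_percent_error(actual, forecast)
print(err.max(), err.mean())
```

With the paper's forecast and empirical surfaces, err.max() and err.mean() would play the role of the reported maximum (31.32%) and average (6.21%) errors, under this assumed definition.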

