MetaLight: Value-based Meta-reinforcement Learning for Traffic Signal Control


Xinshi Zang¹, Huaxiu Yao², Guanjie Zheng², Nan Xu¹, Kai Xu³, Zhenhui Li²
¹Shanghai Jiao Tong University, ²Pennsylvania State University, ³Shanghai Tianrang Intelligent Technology Co., Ltd
zang-xs@foxmail.com, {huaxiuyao, gjz5038, jessieli}@ist.psu.edu, xunannancy@sjtu.edu.cn, kai.xu@tianrang-inc.com

Abstract

Using reinforcement learning for traffic signal control has attracted increasing interest recently. Various value-based reinforcement learning methods have been proposed for this classical transportation problem and have achieved better performance than traditional transportation methods. However, current reinforcement learning models rely on tremendous amounts of training data and computational resources, which may lead to bad consequences (e.g., traffic jams or accidents) in the real world. In traffic signal control, some algorithms have been proposed to enable quick learning from scratch, but little attention has been paid to learning by transferring and reusing learned experience. In this paper, we propose a novel framework, named MetaLight, to speed up the learning process in new scenarios by leveraging the knowledge learned from existing scenarios. MetaLight is a value-based meta-reinforcement learning workflow built on the representative gradient-based meta-learning algorithm MAML, which includes periodically alternating individual-level adaptation and global-level adaptation. Moreover, MetaLight improves the state-of-the-art reinforcement learning model FRAP for traffic signal control by optimizing its model structure and updating paradigm. Experiments on four real-world datasets show that our proposed MetaLight not only adapts more quickly and stably to new traffic scenarios, but also achieves better performance.

Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

1 Introduction

Inefficient traffic signal plans waste people's time on the road. Current traffic signal control systems are not optimized according to dynamic traffic data. For example, widely adopted traffic control systems such as SCATS (Lowrie 1992) rely on manually designed traffic signal plans. With the development of AI technology and the growth of available traffic data (e.g., surveillance camera data), recent studies apply deep reinforcement learning (DRL) to traffic signal control problems (Wei et al. 2018; Zheng et al. 2019a; Van der Pol and Oliehoek 2016). DRL methods can learn and adjust traffic signal policies based on feedback from the environment and have shown better performance than traditional transportation methods.

The training mechanism of DRL follows a trial-and-error manner, and thus its superior performance is conditioned on a large number of training episodes. The resulting cost in computational resources and learning time is unacceptable in real-world traffic signal control. For example, if the traffic condition is complicated, traditional DRL models need a long time to generate enough samples and to train the models well. Even worse, some successive bad trials may result in severe traffic congestion, which may break down the transportation system. Thus, the agent for traffic signal control should be able to learn quickly from a few samples.

Recently, meta-reinforcement learning has been widely studied to improve the efficiency of deep reinforcement learning by transferring previously learned knowledge and integrating it with new information. There are mainly two lines of meta-reinforcement learning algorithms: (1) recurrent-based meta-reinforcement learning (Duan et al. 2016; Mishra et al. 2018), in which the parameters of the prediction model are controlled by a learnable recurrent meta-optimizer and its corresponding hidden state; and (2) gradient-based meta-reinforcement learning (Finn, Abbeel, and Levine 2017; Nagabandi, Finn, and Levine 2019; Nagabandi et al. 2019), which learns a well-generalized initialization that can be quickly adapted to a new scenario with a few gradient steps. However, simply applying either gradient-based or recurrent-based meta-reinforcement learning methods to traffic signal control faces two key challenges:

• How to learn and adapt to the complicated and heterogeneous scenarios in traffic signal control? Compared with previous meta-reinforcement learning applications, which mainly focus on homogeneous tasks, the scenarios of traffic signal control are more complicated and heterogeneous. For example, the number of signal phases at different intersections varies from two to eight, and one intersection may contain different numbers of lanes and roads. Since the DRL models in different scenarios are different, a sufficiently flexible meta-reinforcement learning model is required to handle various scenarios.

• How to apply meta-learning to value-based reinforcement learning? The action space of the traffic signal agent is discrete and small. For example, according to (Wei et al. 2019c), the number of signal phases is usually no more than eight. With this small action space, value-based DRL is more suitable and is more frequently used in current DRL-based traffic signal control (Wei et al. 2018), where the model is trained in an off-policy fashion. However, current meta-reinforcement learning mainly focuses on policy-based DRL, where on-policy data is used.

To address these challenges, we propose a novel meta-reinforcement learning framework for traffic signal control, MetaLight, which follows the gradient-based meta-reinforcement learning line. To the best of our knowledge, this is the first work to introduce the meta-reinforcement learning paradigm into DRL-based traffic signal control. In MetaLight, we first improve a structure-agnostic DQN-based traffic signal control model called FRAP (Zheng et al. 2019a), which enables heterogeneous scenarios to share the same parameters. Then, following the meta-reinforcement learning paradigm, we learn a well-generalized initialization from various traffic signal control tasks. Given a new traffic scenario with a limited learning period, the learned initialization can be quickly adapted with a few generated samples. To address the second challenge, we further propose two types of adaptation mechanisms: individual-level adaptation and global-level adaptation. The former is a step-by-step optimization process on each task, and the latter is a periodic synchronous update over a batch of sampled tasks. Each task inherits a globally shared initialization of parameters, then performs individual-level adaptation, and finally contributes to global-level adaptation.

We conduct extensive experiments to evaluate MetaLight on four real-world datasets. The results show that our proposed MetaLight enhances learning efficiency and outperforms state-of-the-art baselines in traffic signal control. In summary, this paper makes the following key contributions:

• To improve the efficiency of traffic signal control, we are the first to apply value-based meta-reinforcement learning to traffic signal control.
• We propose MetaLight, a novel value-based meta-reinforcement learning framework that combines individual-level adaptation and global-level adaptation.
• Empirically, we demonstrate the effectiveness and efficiency of our proposed model on four real-world datasets.

2 Related Work

Meta-reinforcement learning. Meta-reinforcement learning aims to solve a new reinforcement learning task by leveraging the experience learned from a set of similar tasks. Currently, meta-reinforcement learning can be categorized into two groups. The first group of approaches (Duan et al. 2016; Wang et al. 2016; Mishra et al. 2018) uses an external memory to store previously learned knowledge and reuses this knowledge in future tasks. For example, (Wang et al. 2016) trains a recurrent neural network that takes the training data as input and outputs the parameters of a learner model. These approaches can achieve relatively good performance, but they may lack computational efficiency (Finn and Levine 2017).

In contrast, the second type of approach (Li and Malik 2016; Finn, Abbeel, and Levine 2017; Nagabandi et al. 2019; Andrychowicz et al. 2016; Yao et al. 2019) aims to learn an optimal parameter initialization or optimizer. Representatively, model-agnostic meta-learning (MAML) (Finn, Abbeel, and Levine 2017) optimizes the initial parameters of the base learner in the meta-training process, which significantly improves the efficiency of reinforcement learning on the new task. However, most gradient-based meta-reinforcement learning algorithms focus on policy-based reinforcement learning; how to combine MAML with value-based reinforcement learning is rarely studied.

Reinforcement learning for traffic signal control. RL-based traffic signal control has attracted wide attention from both academia and industry in the last two decades. Traditional RL methods (Balaji, German, and Srinivasan 2010; Abdulhai, Pringle, and Karakoulas 2003) are limited to tabular Q-learning and a discrete state representation. However, with the development of RL methods, researchers have studied different RL methods in traffic signal control. In terms of algorithms, current studies can be categorized into value-based methods (e.g., deep Q-Networks (Van der Pol and Oliehoek 2016; Wei et al. 2019a; 2019b; Zheng et al. 2019b)) and policy-based methods (Aslani, Mesgari, and Wiering 2017; Xiong et al. 2019).

In addition to the choice of method category, researchers have also explored different designs of the network and features. Early studies (Abdoos, Mozayani, and Bazzan 2011) use numerical features to describe the traffic scenario, e.g., the queue length of each lane. These features are fed into a multi-layer perceptron to predict the action (e.g., the signal to set). More recently, researchers (Gao et al. 2017; Van der Pol and Oliehoek 2016) convert traffic situation features (e.g., positions of vehicles) into images and apply convolutional neural networks (CNNs) to learn their representations. For instance, (Gao et al. 2017) achieves nearly 50% improvement compared with transportation methods. (Wei et al. 2018) proposes a dual-branch network structure to effectively approximate the value function. After that, (Zheng et al. 2019b) proposes a plain fully-connected neural network with concise state features and a properly designed reward function, which outperforms all the state-of-the-art baseline methods.

However, one common problem of the aforementioned methods is the lack of a universal network design for different intersection scenarios, which means that a different network has to be trained from scratch for each scenario. (Zheng et al. 2019a) recently proposed a novel network design, called FRAP, based on the principle of phase competition, making it possible to apply the same set of network parameters universally to different intersections.

In this paper, we further modify FRAP so that it applies to more universal scenarios, including different lane and intersection settings. Additionally, we combine the improved FRAP and the extended MAML paradigm in MetaLight to transfer the knowledge learned from different scenarios and enable quick adaptation to new scenarios.

3 Problem Statement

In this section, we first define several basic concepts and then formally define the meta-reinforcement learning problem for traffic signal control.

3.1 Preliminary

In this paper, we investigate traffic signal control at a single intersection under different scenarios. In most cases, the scenario of an intersection is determined by three concepts: traffic flow, entering approaches/lanes, and phase setting, which are explained as follows.

Figure 1: Intersection structure and traffic signal phases. (a) shows a standard intersection with four entering approaches (E/N/W/S), each of which has three types of lanes (right/through/left). (b) enumerates eight typical signal phases.

• Traffic flow: Both the pattern and the volume of traffic flow differ significantly between intersections. In traditional DRL models for traffic light control, traffic flow is used as features, which does not change the state/action space (Wei et al. 2019c). Therefore, intersections differing only in traffic flow are regarded as homogeneous scenarios in this paper.

• Entering approach/lane: For each intersection, an entering approach is the direction from which vehicles enter. In the real world, most intersections are equipped with four entering approaches, but some have three or even five. Figure 1 illustrates a standard 4-approach intersection. Each entering approach has three types of lanes: left-lane, through-lane, and right-lane. According to (Wei et al. 2019c), many features in the state of RL methods are measured in units of lanes (such as queue length per lane), so the number of entering approaches and lanes determines the dimension of the state space. Thus, intersections with different numbers of entering approaches and lanes are regarded as heterogeneous scenarios.

• Phase setting: As illustrated in Figure 1, there are theoretically eight signal phases in total, and each phase controls two traffic movements that do not conflict with each other. Each intersection has its own phase setting based on its traffic characteristics. Since the dimension of the action space of the RL agent is directly correlated with the number of phases (Wei et al. 2019c), we also define intersections with different phase settings as heterogeneous scenarios.

3.2 Problem: Meta-reinforcement Learning for Traffic Signal Control

Following the traditional task definition of meta-reinforcement learning (Finn, Abbeel, and Levine 2017), in traffic signal control we are given a set of N_t intersections I_S = {I_1, ..., I_{N_t}} sampled from a task distribution E. The control process at each intersection I_i is represented as a Markov decision process <S_i, A_i, R_i, γ_i, H_i>, which contains a finite set of states S_i, a finite set of actions A_i, a reward function R_i, a discount factor γ_i, and the episode length H_i. The reward R_i(s, a) at step t is defined as R_i(s, a) = E[R_{t+1} | S_i(t) = s, A_i(t) = a]. For each intersection I_i, given an episode length H_i, the goal is to learn an optimal control policy π_i(a | s). In addition, for intersection I_i, the action-value function is defined as the sum of the rewards r_i(t) discounted by γ_i at each timestep t, which is formulated as

    Q(s, a; f_θ) = E[ r_i(t) + γ_i r_i(t+1) + ... | s_i(t) = s, a_i(t) = a ].    (1)

We then define the base learner f with learnable parameters θ to map observations in S_i to outputs in A_i.
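
To make the notation concrete, the following minimal PyTorch-style sketch shows one possible form of the base learner f_θ (a simple stand-in for the FRAP++ model introduced later in Section 4.1) together with a Monte-Carlo estimate of the discounted return in Eq. (1). The architecture and all names below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BaseLearner(nn.Module):
    """Hypothetical stand-in for the base learner f_theta: it maps an
    intersection observation (a flat feature vector) to one Q-value per
    signal phase. The paper's actual base learner is FRAP++ (Sec. 4.1)."""
    def __init__(self, obs_dim: int, num_phases: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_phases),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)  # shape: (batch, num_phases)

def discounted_return(rewards, gamma):
    """Monte-Carlo estimate of Eq. (1): r(t) + gamma*r(t+1) + gamma^2*r(t+2) + ..."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```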

The effectiveness of the function f with parameters θ_i is measured by the loss

    L(f_{θ_i}) = E_{(s,a,r,s') ~ D_i} [ ( r + γ_i max_{a'} Q(s', a'; f_{θ_i^-}) - Q(s, a; f_{θ_i}) )² ],    (2)

where θ_i^- are the parameters of the target network in FRAP, which are held fixed and synchronized every C iterations (Mnih et al. 2015).

In meta-reinforcement learning, we aim to learn a well-generalized meta-learner M(·) to enhance the learning efficiency on future traffic signal control tasks. In general, the whole meta-learning procedure can be split into two steps: meta-training and meta-testing. During meta-training, the parameters of the base learner f (i.e., {θ_1, ..., θ_{N_t}}) and the well-generalized meta-learner M(·) are updated alternately. First, the parameters {θ_1, ..., θ_{N_t}} are learned using transitions D_i sampled from each intersection I_i. The goal is to minimize the loss over all meta-training tasks, which is defined as

    {θ_1, ..., θ_{N_t}} : min_{θ_1,...,θ_{N_t}} Σ_{i=1..N_t} L(M(f_{θ_i}); D_i).    (3)

Then, the meta-learner M is optimized by sampling another batch of transitions D_i':

    M : min_M Σ_{i=1..N_t} L(M(f_{θ_i}); D_i').    (4)

After learning a well-generalized meta-learner, during meta-testing the model f is adapted to a new traffic intersection I_t using transitions D_t sampled from it.
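
A minimal sketch of the loss in Eq. (2), written in PyTorch and assuming transitions are stored as tensors, is given below. Here `q_net` plays the role of f_{θ_i} and `target_net` the role of the periodically frozen target parameters θ_i^-; the function name and batch layout are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma):
    """One-step temporal-difference loss of Eq. (2) over a mini-batch
    (s, a, r, s') drawn from the replay memory D_i. Actions `a` are
    assumed to be int64 indices of the chosen phase."""
    s, a, r, s_next = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a; f_theta)
    with torch.no_grad():                                  # target network is frozen
        target = r + gamma * target_net(s_next).max(dim=1).values
    return F.mse_loss(q_sa, target)
```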

Then, we introduce model-agnostic meta-learning (MAML), one of the representative gradient-based meta-reinforcement learning algorithms (Finn, Abbeel, and Levine 2017). In MAML, the meta-learner M is a well-generalized initialization θ_0 of the parameters of the base learner f. With a few gradient descent steps, we can obtain the adapted parameters θ_i. Thus, the meta-learner M can be written as (taking one gradient step as an example) M(f_{θ_i}) = f_{θ_0 - α ∇_θ L(f_θ; D_i)}. In the meta-training process, the overall loss of MAML is

    L_all = L(f_{θ_0 - α ∇_θ L(f_θ; D_i)}; D_i').    (5)

4 The MetaLight Framework

In this section, we first briefly introduce the structure-agnostic and parameter-sharing RL model FRAP (Zheng et al. 2019a) and propose an improved model, FRAP++. Then, we elaborate the entire parameter learning procedure of our proposed MetaLight, including individual-level adaptation and global-level adaptation.

4.1 Structure-agnostic and Parameter-sharing RL Model

Figure 2: Illustration of FRAP and FRAP++. FRAP uses the sum of the lanes' representations to represent a phase, while FRAP++ uses their mean. Yellow multi-layer perceptrons (MLPs) are shared across phases.

In traffic signal control, a flexible base model f is required to handle scenarios across the heterogeneous intersections described in Sec. 3.1. Figure 2 illustrates the structures of FRAP and FRAP++ at a 3-phase intersection. The network consists of several embedding layers and convolutional layers. The parameters of the embedding layers are shared across lanes, which means that the number and type of approaching lanes only affect the network structure, not the parameters of the embedding layers. Furthermore, since FRAP uses a fixed number of 1x1 filters in the convolutional layers, these are also independent of the number and type of phases. In summary, the structure of FRAP depends on the number of lanes and phases at the intersection, but the network parameters are shared across different intersections.

To improve the flexibility of FRAP across different lane combinations, we propose an improved model, FRAP++, which enhances FRAP in two ways: (1) FRAP++ represents the phase demand by averaging each lane's demand instead of summing it, in order to remove the influence of differing lane numbers under each phase and make FRAP++ widely applicable. (2) FRAP updates parameters only after each whole episode, which violates DQN's one-step updating mechanism; FRAP++ instead increases the updating frequency by performing a mini-batch update after each step within an episode.

Similar to (Zheng et al. 2019a), the state of FRAP++ consists of the number of vehicles and the signal phase on each approaching lane. The action of the RL agent is to choose the phase for the next time interval. The reward is defined as the average queue length on the approaching lanes. Therefore, FRAP++ is a structure-agnostic model with parameters shared between different scenarios, which fits the properties of the base learner f defined in Sec. 3.
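
The first modification, replacing the sum over lane representations with a mean, can be sketched as follows. This is only an illustration of the aggregation step, assuming the per-lane embeddings have already been computed; it is not the full FRAP/FRAP++ architecture.

```python
import torch

def phase_demand(lane_embeddings: torch.Tensor, use_mean: bool = True) -> torch.Tensor:
    """Aggregate lane-level demand embeddings of shape (num_lanes_in_phase, embed_dim)
    into a single phase-level representation. FRAP sums the lane embeddings;
    FRAP++ averages them, so that phases covering different numbers of lanes
    stay on a comparable scale."""
    if use_mean:
        return lane_embeddings.mean(dim=0)   # FRAP++ (this paper)
    return lane_embeddings.sum(dim=0)        # original FRAP
```

Because the lane-level parameters are shared and the aggregation is scale-invariant, the same parameter set can be loaded into intersections with different numbers of lanes and phases.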

4.2 MetaLight Framework

Next, we introduce our MetaLight framework, which reuses previously learned knowledge to facilitate the learning process at a target intersection. MetaLight follows the traditional gradient-based meta-reinforcement learning framework, MAML, described in Sec. 3. However, the traditional design of MAML mainly targets policy-based DRL problems. Empirically, on value-based DRL models like FRAP++, MAML only slightly outperforms random initialization, which does not meet our expectations and cannot be deployed in large-scale real-world scenarios (see the experiments in Section 5 for more details). Thus, we improve MAML by alternately applying individual-level adaptation and global-level adaptation. Specifically, MetaLight takes advantage of the fast learning of DQN by updating parameters at each time step, and extracts common knowledge, as in MAML, by gradient descent. The framework of MetaLight is illustrated in Figure 3, and we detail the two adaptation steps as follows.

Individual-level Adaptation. As described in (Mnih et al. 2015), DQN uses a neural network to represent the action-value function Q(s, a) in Equation (1). In traffic signal control, FRAP++ follows the standard design of DQN with experience replay and a target value network. At each intersection I_i, the agent's experience e_i(t) = (s_i(t), a_i(t), r_i(t), s_i(t+1)) at each timestep t is stored in the set D_i.

As shown in Figure 3, in individual-level adaptation the parameters θ_i of each task T_i are updated at each timestep by gradient descent, formulated as (taking one gradient step as an example)

    θ_i ← θ_i - α ∇_θ L(f_θ; D_i),    (6)

where α is the step size and the loss function L is defined in Eqn. (2). In value-based reinforcement learning, individual-level adaptation is performed at each timestep to speed up the learning process on the source intersections.
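
A minimal sketch of one individual-level adaptation step is shown below, reusing the `dqn_loss` sketch from Sec. 3.2. The replay memory layout, the batch size of 30 (see Sec. 5.4), and the helper names are assumptions made for illustration.

```python
import random
import torch

def individual_adaptation(q_net, target_net, memory, optimizer, gamma, batch_size=30):
    """One individual-level adaptation step (Eq. (6)): a DQN-style mini-batch
    update applied after every interaction with the source intersection,
    rather than once per episode. `memory` is a list of (s, a, r, s') tensors."""
    if len(memory) < batch_size:
        return
    s, a, r, s_next = (torch.stack(x) for x in zip(*random.sample(memory, batch_size)))
    loss = dqn_loss(q_net, target_net, (s, a, r, s_next), gamma)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()   # theta_i <- theta_i - alpha * grad_theta L(f_theta; D_i)
```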

Global-level Adaptation. After the individual-level adaptation, global-level adaptation aggregates the adaptations of the individual intersections I_i and then updates the initialization θ_0 of the meta-learner using a newly sampled batch of transitions D_i'. The initialization θ_0 is updated as follows:

    θ_0 ← θ_0 - β ∇_θ Σ_{I_i} L(f_θ; D_i'),    (7)

where β is the step size. The whole meta-training process of MetaLight is described in Alg. 1.

Figure 3: Meta-training framework of MetaLight. From left to right, a batch of tasks is first sampled. Then, in meta-training, the whole episode of length T is split into intervals of length t_θ. During each interval, the base learner inherits the initialization from the meta-learner and then conducts individual-level adaptation using samples drawn from memory at each time step. At the end of each interval, the meta-learner performs global-level adaptation with another batch of samples from the memory.

Algorithm 1: Meta-training process of MetaLight
Input: set of source intersections I_S; step sizes α, β; frequency t_θ of updating meta parameters
Output: optimized parameter initialization θ_0
 1: Randomly initialize parameters θ_0
 2: for round = 1, ..., N do
 3:     Sample a batch of intersections from E
 4:     for t = 1, t_θ+1, 2t_θ+1, ..., T do
 5:         for each intersection I_i do
 6:             θ_i ← θ_0
 7:             for t' = t, ..., min(t + t_θ, T) do
 8:                 Generate transitions into D and sample transitions as D_i
 9:                 Update θ_i ← θ_i - α ∇_θ L(f_θ; D_i) by Eqn. (6)
10:         Sample new transitions from D as D_i'
11:         Update θ_0 ← θ_0 - β ∇_θ Σ_{I_i} L(f_θ; D_i') by Eqn. (7)

Transfer Knowledge to New Intersections. In the meta-training process of MetaLight, we learn a well-generalized initialization of the parameters of f. We then apply this initialization θ_0 to a new target intersection I_t. Using θ_0 as the initialization, the update process at intersection I_t is defined as

    θ_t ← θ_t - α ∇_θ L(f_θ; D_t).    (8)

We then evaluate the performance using the adapted parameters θ_t. The meta-testing process is outlined in Alg. 2.

Algorithm 2: Meta-testing process of MetaLight
Input: set of target intersections I_T; step size α; learned initialization θ_0
Output: optimized parameters θ_t for each intersection I_t
 1: for each intersection I_t in I_T do
 2:     θ_t ← θ_0
 3:     for t = 1, ..., T do
 4:         Generate and sample transitions as D_t
 5:         Update θ_t ← θ_t - α ∇_θ L(f_θ; D_t) by Eqn. (8)
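
Putting the two adaptation levels together, the following sketch mirrors the structure of Alg. 1. The task interface (`rollout_step`, `sample_batch`, `memory`) is a placeholder assumption, and the global update applies the summed task gradients directly to θ_0, i.e., a first-order approximation of Eq. (7) rather than the exact second-order MAML update.

```python
import copy
import torch

def metalight_round(theta0_net, tasks, alpha, beta, t_theta, gamma):
    """One meta-training round of MetaLight (cf. Alg. 1), first-order sketch.
    Every sampled task starts from the shared initialization theta_0, performs
    t_theta individual-level adaptation steps (Eq. (6)), and then contributes
    the gradient of its loss on freshly sampled transitions D_i' to the
    global-level update of theta_0 (Eq. (7))."""
    meta_grads = [torch.zeros_like(p) for p in theta0_net.parameters()]
    for task in tasks:
        learner = copy.deepcopy(theta0_net)           # theta_i <- theta_0
        target = copy.deepcopy(learner)               # frozen target network
        opt = torch.optim.SGD(learner.parameters(), lr=alpha)
        for _ in range(t_theta):                      # individual-level adaptation
            task.rollout_step(learner)                # interact, append to memory
            individual_adaptation(learner, target, task.memory, opt, gamma)
        loss = dqn_loss(learner, target, task.sample_batch(), gamma)
        grads = torch.autograd.grad(loss, list(learner.parameters()))
        for acc, g in zip(meta_grads, grads):
            acc += g                                  # sum over the task batch
    with torch.no_grad():                             # global-level adaptation, Eq. (7)
        for p, g in zip(theta0_net.parameters(), meta_grads):
            p -= beta * g
```

Meta-testing (Alg. 2) then amounts to continuing such individual-level adaptation on the target intersection, starting from the learned θ_0.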

5 Experiment

5.1 Experiment Settings

We conduct experiments on a simulation platform called CityFlow (Zhang et al. 2019), which provides up-to-date simulation environments for traffic signal control. (Codes for the experiments are provided by the authors.) The traffic data is first fed into the simulator, and vehicles move to their destinations according to the settings of the environment. The simulator executes the traffic signal actions chosen by the control method and returns the resulting state to it.

5.2 Datasets

We use four real-world datasets from two cities in China, Jinan (JN) and Hangzhou (HZ), and two cities in the United States, Atlanta (AT) and Los Angeles (LA). The raw traffic data from the two Chinese cities contains information about the vehicles passing through the intersections, captured by nearby surveillance cameras. The raw data from the two American cities consists of full vehicle trajectories collected by several video cameras along the streets (/trafficanalysistools/ngsim.htm). Based on these raw data, we run the traffic flow for one hour, and the entering lanes consist only of left-lanes and through-lanes.

Because of the limited kinds of phase settings in the raw data, we add some new phase settings in order to build enough heterogeneous scenarios. There are eleven phase settings in total, including four kinds of 4-phase, six kinds of 6-phase, and one 8-phase setting. They are divided into two groups, named PS1 and PS2. As described in Figure 4, PS1 (colored red) contains six phase settings, and PS2 (colored blue) consists of the other five.

Figure 4: The eleven phase settings used in the experiments are composed of different phases from A to H. Red represents PS1 and blue denotes PS2.

As summarized in Table 1, we construct 24 scenarios in Hangzhou as the training set. The phase setting of each scenario is drawn from PS1. The testing set is classified into three types: Task-1 is a set of homogeneous tasks in which the testing sets are similar to the training sets except for traffic flow. Task-2 represents heterogeneous tasks, meaning the testing datasets differ from the training datasets in both traffic flow and phase setting. Task-3 consists of both homogeneous and heterogeneous tasks from different cities (Jinan, Atlanta, and Los Angeles).

Table 1: Summary of datasets
Datasets            Phase Settings
Training Sets       PS1
Task-1 (Testing)    PS1
Task-2 (Testing)    PS2
Task-3 (Testing)    PS1/PS2

5.3 Methods for Comparison

To evaluate the effectiveness and efficiency of MetaLight, we compare it with several representative methods described as follows. All baselines use FRAP++ as the base model.

• Random: Random uses random initialization and trains the FRAP++ model from scratch.
• Pretrained: Pretrained means selecting one existing FRAP++ model's parameters as the initial parameters for a new intersection. The similarity between intersections determines which model is chosen. In the homogeneous setting, the model trained on the same phase setting is chosen. In the heterogeneous setting, since there are no existing intersections with the same phase setting, the model trained on the 8-phase setting is used for initialization.
• MAML (Finn, Abbeel, and Levine 2017): In MAML, we combine the original MAML reinforcement learning framework with FRAP. The original FRAP matches the MAML framework for policy-based reinforcement learning well, because it also conducts model updating at the end of a whole episode.
• SOTL (Cools, Gershenson, and Hooghe 2013): Self-Organizing Traffic Light control (SOTL) is a classical transportation method that provides a reference value for comparison. SOTL sets a pre-defined threshold for the number of waiting vehicles on approaching lanes and changes the signal phase when the threshold is exceeded.

5.4 Model Details and Hyperparameter Settings

In MetaLight, the base model FRAP++ shares a similar network structure with FRAP (Zheng et al. 2019a), except for the average operation in the embedding layers. The learning rates of the learner and the meta-learner are set to 0.001 for both MetaLight and MAML in both meta-training and meta-testing. The episode length for all scenarios is 3600 seconds, and the interval of each interaction between the simulator and the RL agent is 10 seconds. For MetaLight, the learner conducts a model update after each interaction using 30 samples and only one epoch of training, and the meta-learner updates itself at intervals of ten learner updates. For MAML, the learner first undertakes one centralized update at the end of each episode with 1000 samples and 100 epochs of training; the meta-learner then updates itself using new episodes each time.
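
For reference, the hyperparameters reported above can be collected in a single configuration dictionary; only values stated in the text are included, and the key names are our own.

```python
# Hyperparameters from Section 5.4 (key names are illustrative).
METALIGHT_CONFIG = {
    "lr_learner": 1e-3,            # alpha, used in meta-training and meta-testing
    "lr_meta": 1e-3,               # beta
    "episode_length_sec": 3600,    # one hour of simulated traffic
    "action_interval_sec": 10,     # simulator <-> agent interaction interval
    "minibatch_size": 30,          # samples per individual-level update
    "epochs_per_update": 1,
    "meta_update_every": 10,       # meta-learner updates once per 10 learner updates
}
```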

5.5 Evaluation Metrics

We choose travel time as the evaluation metric, which is also the most frequently used measure of performance in the transportation field. This metric is defined as the average travel time that vehicles spend on approaching lanes (in seconds).

5.6 Task-1: Homogeneous Scenarios

In Task-1, we choose six homogeneous scenarios whose phase settings all come from PS1 and exist in the training set. The results of all methods are reported in Table 2; each phase setting stands for one scenario. Note that the improvement is calculated by comparing with the best baseline. We can observe that either Pretrained or MAML is the best baseline, but MetaLight outperforms them in most scenarios except for the 4b phase setting. The average improvement over these phase settings is 5.52%, which is not large. A possible reason is that the overfitting problem is not severe in the homogeneous setting, so simply reusing existing models can work well. Even so, MetaLight is preferable since it applies only one initial model to all of these scenarios, while the Pretrained method needs to select a suitable model each time.

Table 2: Performance of different methods (Random, Pretrained, MAML, MetaLight) on Task-1. Travel time (in seconds) is reported for each phase setting; the average improvement over the best baseline is 5.52%.

Table 3: Overall performance on Task-2. Each result is the average travel time over all scenarios; the average improvement over all phase settings is 22.57%.
