2018-12 New RL for Optimal Control and Graph Games - CDC Workshop


F.L. Lewis
National Academy of Inventors
Moncrief-O'Donnell Chair, UTA Research Institute (UTARI), The University of Texas at Arlington, USA
and
Qian Ren Consulting Professor, State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang, China

New Developments in Integral Reinforcement Learning: Continuous-time Optimal Control and Games

Supported by: ONR, US NSF, China NNSF, China Project 111
Talk available online at http://www.UTA.edu/UTARI/acs

Invited by Manfred Morari, Konstantinos Gatsis, Pramod Khargonekar, and George Pappas

New Research Results
- Integral Reinforcement Learning for online optimal control
- IRL for online solution of multi-player games
- Multi-player games on communication graphs
- Off-policy learning
- Experience replay
- Bio-inspired multi-actor critics
- Output synchronization of heterogeneous MAS
Applications to: microgrids, robotics, industrial process control

Optimality and Games
Optimal control is effective for:
- Aircraft autopilots
- Vehicle engine control
- Aerospace vehicles
- Ship control
- Industrial process control
Multi-player games occur in:
- Networked systems (bandwidth assignment)
- Economics
- Control theory (disturbance rejection)
- Team games
- International politics
- Sports strategy
But optimal control and game solutions are found by offline solution of matrix design equations, and a full dynamical model of the system is needed.

Optimal Control - The Linear Quadratic Regulator (LQR)
User-prescribed optimization criterion (Q, R):
V(x(t)) = \int_t^\infty (x^T Q x + u^T R u) \, d\tau

Off-line design loop, using the Algebraic Riccati Equation (ARE):
0 = PA + A^T P + Q - P B R^{-1} B^T P,    K = R^{-1} B^T P

On-line real-time control loop:
u = -Kx applied to the system \dot{x} = Ax + Bu

This is an offline design procedure that requires knowledge of the system dynamics model (A, B).
System modeling is expensive, time consuming, and inaccurate.
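To make the offline design loop concrete, here is a minimal Python sketch that solves the ARE and forms the LQR gain; the matrices A, B, Q, R are a made-up two-state example, not a system from the talk.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical two-state example system (placeholder values, not from the talk)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # control weighting

# Offline design: solve 0 = PA + A^T P + Q - P B R^-1 B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # K = R^-1 B^T P
print("LQR gain K =", K)          # control is u = -K x
```

Note that the full model (A, B) is needed here; this is exactly the dependence the rest of the talk removes.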

Adaptive control is online and works for unknown systems, but is generally not optimal.
Optimal control is offline, and needs to know the system dynamics to solve the design equations.
We want to find optimal control solutions:
- Online, in real time
- Using adaptive control techniques
- Without knowing the full dynamics
- For nonlinear systems and general performance indices
Bring together optimal control and adaptive control.
Reinforcement Learning turns out to be the key to this!

Books
F.L. Lewis, D. Vrabie, and V. Syrmos, Optimal Control, third edition, John Wiley and Sons, New York, 2012. New chapters on Reinforcement Learning and Differential Games.
D. Vrabie, K. Vamvoudakis, and F.L. Lewis, Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles, IET Press, 2012.

F.L. Lewis and D. Vrabie, "Reinforcement learning and adaptive dynamic programming for feedback control," IEEE Circuits & Systems Magazine, Invited Feature Article, pp. 32-50, Third Quarter 2009.
F. Lewis, D. Vrabie, and K. Vamvoudakis, "Reinforcement learning and feedback control," IEEE Control Systems Magazine, Dec. 2012.

Multi-player Game Solutions
IEEE Control Systems Magazine, Dec. 2017

RL for Markov Decision Processes (X, U, P, R)
X = states, U = controls
P = probability of going to state x' from state x, given that the control is u
R = expected reward on going to state x' from state x, given that the control is u

Expected value of a policy \pi(x,u):
V_k^\pi(x) = E_\pi\{ J_{k,T} \mid x_k = x \} = E_\pi\{ \sum_{i=k}^{k+T} \gamma^{i-k} r_i \mid x_k = x \}

Optimal control problem: determine a policy \pi(x,u) to minimize the expected future cost,
V_k^*(x) = \min_\pi V_k^\pi(x) = \min_\pi E_\pi\{ \sum_{i=k}^{k+T} \gamma^{i-k} r_i \mid x_k = x \},
with optimal policy \pi^*(x,u) = \arg\min_\pi V_k^\pi(x).

Policy Iteration (discrete state):
Policy evaluation by the Bellman equation:
V_j(x) = \sum_u \pi_j(x,u) \sum_{x'} P^u_{xx'} [ R^u_{xx'} + \gamma V_j(x') ]    for all x \in X
Policy improvement:
\pi_{j+1}(x,u) = \arg\min_u \sum_{x'} P^u_{xx'} [ R^u_{xx'} + \gamma V_j(x') ]    for all x \in X

The policy evaluation equation is a system of N simultaneous linear equations, one for each state.
Policy improvement makes V^{\pi_{j+1}}(x) \le V^{\pi_j}(x).

R.S. Sutton and A.G. Barto, Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA, 1998.
D.P. Bertsekas and J.N. Tsitsiklis, Neuro-Dynamic Programming, Athena Scientific, MA, 1996.
W.B. Powell, Approximate Dynamic Programming: Solving the Curses of Dimensionality, Wiley, New York, 2009.
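Since policy evaluation is just a set of N linear equations, the whole discrete-state loop fits in a few lines. A minimal sketch for a made-up 3-state, 2-action MDP (the transition and cost numbers are invented for illustration):

```python
import numpy as np

# Toy MDP: P[u] is the transition matrix under action u, Rc[u] the expected cost.
P = [np.array([[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]]),
     np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]])]
Rc = [np.array([1.0, 2.0, 0.0]), np.array([2.0, 1.0, 0.5])]
gamma, nS, nA = 0.9, 3, 2

policy = np.zeros(nS, dtype=int)            # initial admissible policy
for _ in range(50):
    # Policy evaluation: solve the N linear Bellman equations (I - gamma*P_pi) V = R_pi
    P_pi = np.array([P[policy[s]][s] for s in range(nS)])
    R_pi = np.array([Rc[policy[s]][s] for s in range(nS)])
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, R_pi)
    # Policy improvement: greedy (cost-minimizing) action in each state
    Qsa = np.array([[Rc[u][s] + gamma * P[u][s] @ V for u in range(nA)] for s in range(nS)])
    new_policy = Qsa.argmin(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
print("optimal policy:", policy, "value:", V)
```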

RL and ADP have been developed for discrete-time systems x_{k+1} = f(x_k, u_k).

Discrete-time system Hamiltonian function:
H(x_k, V, h) = r(x_k, h(x_k)) + V_h(x_{k+1}) - V_h(x_k)
- Directly leads to temporal difference techniques
- The system dynamics does not occur
- Two occurrences of the value allow APPROXIMATE DYNAMIC PROGRAMMING methods

Continuous-time system \dot{x} = f(x,u) Hamiltonian function:
H(x, \partial V/\partial x, u) = \dot{V} + r(x,u) = (\partial V/\partial x)^T \dot{x} + r(x,u) = (\partial V/\partial x)^T f(x,u) + r(x,u)
- Leads to offline solutions if the system dynamics is known
- Hard to do online learning: how does one define a temporal difference?
- The system dynamics DOES occur
- Only ONE occurrence of the value gradient

How can one do Policy Iteration for unknown continuous-time systems?
What is Value Iteration for continuous-time systems?
How can one do ADP for CT systems?

Adaptive (Approximate) Dynamic Programming - four ADP methods proposed by Paul Werbos
(Bertsekas: Neurodynamic Programming, discrete-time systems)
A critic NN approximates:
- Heuristic Dynamic Programming (Value Iteration): the value V(x_k)
- Dual Heuristic Programming: the gradient \partial V/\partial x
- Action-Dependent Heuristic Dynamic Programming (Watkins Q-learning): the Q function Q(x_k, u_k)
- Action-Dependent Dual Heuristic Programming: the gradients \partial Q/\partial x, \partial Q/\partial u
An action NN approximates the control.
Barto & Bradtke: Q-learning proof (imposed a settling time).

CT Systems - Derivation of the Nonlinear Optimal Regulator
To find online methods for optimal control, focus on these two equations:
Nonlinear system dynamics: \dot{x} = f(x,u) = f(x) + g(x)u
Cost/value: V(x(t)) = \int_t^\infty r(x,u) \, d\tau = \int_t^\infty (Q(x) + u^T R u) \, d\tau

Leibniz's rule gives the differential equivalent, the Bellman equation, in terms of the Hamiltonian function:
0 = H(x, \partial V/\partial x, u) = \dot{V} + r(x,u) = (\partial V/\partial x)^T [f(x) + g(x)u] + r(x,u)

The stationarity condition \partial H/\partial u = 0 gives the stationary control policy:
u = h(x) = -\tfrac{1}{2} R^{-1} g^T(x) \, \partial V/\partial x

Substituting into the Bellman equation gives the HJB equation:
0 = (dV^*/dx)^T f + Q(x) - \tfrac{1}{4} (dV^*/dx)^T g R^{-1} g^T (dV^*/dx),    V^*(0) = 0

Problem: the system dynamics shows up in the Hamiltonian.
Offline solution. The HJB equation is hard to solve and may not have a smooth solution; the dynamics must be known.

CT Policy Iteration - a Reinforcement Learning Technique
Given any admissible policy u = h(x), the cost is given by solving the CT Bellman equation (a scalar equation):
0 = H(x, \partial V/\partial x, u) = (\partial V/\partial x)^T f(x,u) + r(x,u),    with utility r(x,u) = Q(x) + u^T R u

Policy Iteration solution:
Pick a stabilizing initial control policy h_0(x).
Policy evaluation - find the cost from the Bellman equation:
0 = (\partial V_j/\partial x)^T f(x, h_j(x)) + r(x, h_j(x)),    V_j(0) = 0
Policy improvement - update the control:
h_{j+1}(x) = -\tfrac{1}{2} R^{-1} g^T(x) \, \partial V_j/\partial x

Converges to the solution of the HJB equation:
0 = (dV^*/dx)^T f + Q(x) - \tfrac{1}{4} (dV^*/dx)^T g R^{-1} g^T (dV^*/dx)

- Convergence proved by Leake and Liu (1967) and Saridis (1979), if the Lyapunov equation is solved exactly
- Beard & Saridis used Galerkin integrals to solve the Lyapunov equation
- Abu-Khalaf & Lewis used NNs to approximate V for nonlinear systems and proved convergence
Full system dynamics must be known; offline solution.

M. Abu-Khalaf, F.L. Lewis, and J. Huang, "Policy iterations on the Hamilton-Jacobi-Isaacs equation for H-infinity state feedback control with input saturation," IEEE Trans. Automatic Control, vol. 51, no. 12, pp. 1989-1995, Dec. 2006.

Policy Iterations for the Linear Quadratic Regulator
System: \dot{x} = Ax + Bu
Cost: V(x(t)) = \int_t^\infty (x^T Q x + u^T R u) \, d\tau = x^T(t) P x(t)

The differential equivalent is the Bellman equation:
0 = H(x, \partial V/\partial x, u) = \dot{V} + x^T Q x + u^T R u = 2 x^T P (Ax + Bu) + x^T Q x + u^T R u

Given any stabilizing feedback policy u = -Kx, the cost value is found by solving the Lyapunov equation (Bellman equation):
0 = (A - BK)^T P + P(A - BK) + Q + K^T R K

The optimal control is u = -R^{-1} B^T P x = -Kx, where P solves the Algebraic Riccati Equation:
0 = PA + A^T P + Q - P B R^{-1} B^T P

Full system dynamics must be known; offline solution.

LQR Policy Iteration - Kleinman's algorithm (1968)
1. For a given control policy u = -K_j x, solve for the cost (Bellman equation = Lyapunov matrix equation):
0 = A_j^T P_j + P_j A_j + Q + K_j^T R K_j,    with A_j = A - B K_j
2. Improve the policy:
K_{j+1} = R^{-1} B^T P_j

- If started with a stabilizing control policy K_0, the matrices P_j converge monotonically to the unique positive definite solution of the Riccati equation.
- Every iteration step returns a stabilizing controller.
- The system has to be known.
OFF-LINE DESIGN: a Lyapunov equation must be solved at each step.
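A minimal sketch of Kleinman's iteration for the same made-up (A, B, Q, R) used earlier; since that A is already stable, K_0 = 0 is an admissible stabilizing initial gain.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # placeholder example system
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

K = np.zeros((1, 2))                       # stabilizing initial gain (A itself is stable here)
for j in range(20):
    Aj = A - B @ K
    # Step 1 (policy evaluation): solve Aj^T Pj + Pj Aj = -(Q + K^T R K)
    Pj = solve_continuous_lyapunov(Aj.T, -(Q + K.T @ R @ K))
    # Step 2 (policy improvement): K_{j+1} = R^-1 B^T Pj
    K_new = np.linalg.solve(R, B.T @ Pj)
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new
print("Converged gain K =", K)             # agrees with the ARE-based LQR gain
```

Note the contrast with what follows: here every iteration needs the full model (A, B), whereas IRL below replaces step 1 with measured data.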

Integral Reinforcement Learning - work of Draguna Vrabie (2009); US Patent
Key idea: split the value
V(x(t)) = \int_t^\infty r(x,u) \, d\tau = \int_t^{t+T} r(x,u) \, d\tau + \int_{t+T}^\infty r(x,u) \, d\tau

Lemma 1 (Draguna Vrabie). The CT Bellman equation
0 = (\partial V/\partial x)^T f(x,u) + r(x,u) = H(x, \partial V/\partial x, u),    V(0) = 0    ("bad" Bellman equation)
is equivalent to the integral reinforcement (IRL) form of the CT Bellman equation
V(x(t)) = \int_t^{t+T} r(x,u) \, d\tau + V(x(t+T)),    V(0) = 0    ("good" Bellman equation)

This solves the Bellman equation without knowing f(x,u), and it allows the definition of a temporal difference error for CT systems:
e(t) = -V(x(t)) + \int_t^{t+T} r(x,u) \, d\tau + V(x(t+T))
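The equivalence rests on a short argument (reconstructed here for completeness, not copied from the slides): the second half of the split integral is itself the value at x(t+T), and differentiating the integral form recovers the differential Bellman equation.

```latex
\begin{aligned}
V(x(t)) &= \int_t^{\infty} r(x,u)\,d\tau
         = \int_t^{t+T} r(x,u)\,d\tau + \int_{t+T}^{\infty} r(x,u)\,d\tau \\
        &= \int_t^{t+T} r(x,u)\,d\tau + V(x(t+T)).
\end{aligned}
```

Conversely, differentiating V(x(t)) = \int_t^{t+T} r \, d\tau + V(x(t+T)) with respect to T and letting T \to 0 gives 0 = r(x,u) + (\partial V/\partial x)^T f(x,u), the differential Bellman equation. The two forms carry the same information, but the integral form never uses f(x,u).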

Integral Reinforcement Learning (IRL) - Draguna Vrabie
IRL Policy Iteration
Policy evaluation - IRL Bellman equation (cost update):
V_k(x(t)) = \int_t^{t+T} r(x, u_k) \, d\tau + V_k(x(t+T))    (CT Bellman equation; f(x) and g(x) do not appear)
This is equivalent to 0 = (\partial V/\partial x)^T f(x,u) + r(x,u) = H(x, \partial V/\partial x, u), so it solves the Bellman equation (a nonlinear Lyapunov equation) without knowing the system dynamics.

Policy improvement - control gain update:
u_{k+1} = h_{k+1}(x) = -\tfrac{1}{2} R^{-1} g^T(x) \, \partial V_k/\partial x    (g(x) is needed for the control update)

An initial stabilizing control is needed. The iteration converges to the solution of the HJB equation:
0 = (dV^*/dx)^T f + Q(x) - \tfrac{1}{4} (dV^*/dx)^T g R^{-1} g^T (dV^*/dx)

D. Vrabie proved convergence to the optimal value and control (Automatica 2009, Neural Networks 2009).

Approximate Dynamic Programming Implementation
Value Function Approximation (VFA) to solve the Bellman equation - Paul Werbos (ADP), Dimitri Bertsekas (NDP):
V_k(x(t)) = \int_t^{t+T} (Q(x) + u_k^T R u_k) \, dt + V_k(x(t+T))

Approximate the value by a Weierstrass approximator network V = W^T \phi(x):
W_k^T \phi(x(t)) = \int_t^{t+T} (Q(x) + u_k^T R u_k) \, dt + W_k^T \phi(x(t+T))
W_k^T [\phi(x(t)) - \phi(x(t+T))] = \int_t^{t+T} (Q(x) + u_k^T R u_k) \, dt
(regression vector on the left, reinforcement on the time interval [t, t+T] on the right)

This is a scalar equation with vector unknowns - the same form as standard system ID problems in adaptive control. Optimal control and adaptive control come together on this slide, because of RL.
Now use RLS or batch least-squares along the trajectory to get the new weights W_k. Then find the updated feedback:
u_{k+1} = h_{k+1}(x) = -\tfrac{1}{2} R^{-1} g^T(x) \, \partial V_k/\partial x = -\tfrac{1}{2} R^{-1} g^T(x) \, (\partial \phi(x)/\partial x)^T W_k

Direct optimal adaptive control for partially unknown CT systems.

Solving the IRL Bellman Equation
Solve for the value function parameters. For a second-order LQR example, V = x^T P x with symmetric P = [p11 p12; p12 p22], so there are three unknown parameters W = [p11, p12, p22]^T; data from 3 time intervals give 3 equations for the 3 unknowns:

W_k^T [\phi(x(t)) - \phi(x(t+T))]      = \int_t^{t+T} (Q(x) + u_k^T R u_k) \, dt       = \rho(t)
W_k^T [\phi(x(t+T)) - \phi(x(t+2T))]   = \int_{t+T}^{t+2T} (Q(x) + u_k^T R u_k) \, dt  = \rho(t+T)
W_k^T [\phi(x(t+2T)) - \phi(x(t+3T))]  = \int_{t+2T}^{t+3T} (Q(x) + u_k^T R u_k) \, dt = \rho(t+2T)

Put together:
W_k^T [\phi(x(t)) - \phi(x(t+T)), \ \phi(x(t+T)) - \phi(x(t+2T)), \ \phi(x(t+2T)) - \phi(x(t+3T))] = [\rho(t), \ \rho(t+T), \ \rho(t+2T)]

Now solve by batch least-squares, or use Recursive Least-Squares (RLS).
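A minimal simulation sketch of one IRL policy-evaluation step for the same made-up LQR example. The plant model is used only to generate data; the learning step itself sees only x(t), the reinforcement integrals, and B for the gain update. With the quadratic basis \phi(x) = [x_1^2, x_1 x_2, x_2^2], the learned weights are [p11, 2 p12, p22].

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # used only to simulate measured data
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
K = np.array([[0.0, 1.0]])                 # current stabilizing policy u = -K x
T = 0.1

def phi(x):                                # quadratic basis for V = W^T phi(x)
    return np.array([x[0]**2, x[0]*x[1], x[1]**2])

def closed_loop(t, z):                     # state x plus integral reinforcement rho
    x = z[:2]
    u = -K @ x
    rho_dot = x @ Q @ x + u @ R @ u
    return np.concatenate([A @ x + B @ u, [rho_dot]])

x = np.array([1.0, -1.0])
Phi_rows, rho_vals = [], []
for _ in range(6):                         # at least 3 intervals are needed; use 6
    sol = solve_ivp(closed_loop, (0.0, T), np.concatenate([x, [0.0]]), rtol=1e-8)
    x_next, rho = sol.y[:2, -1], sol.y[2, -1]
    Phi_rows.append(phi(x) - phi(x_next))  # regression vector for this interval
    rho_vals.append(rho)                   # measured reinforcement on [t, t+T]
    x = x_next

W, *_ = np.linalg.lstsq(np.array(Phi_rows), np.array(rho_vals), rcond=None)
P_k = np.array([[W[0], W[1]/2], [W[1]/2, W[2]]])   # recover P from the weights
K_next = np.linalg.solve(R, B.T @ P_k)             # policy improvement: needs only B
print("P_k =\n", P_k, "\nK_{k+1} =", K_next)
```

With K_{k+1} in hand, the policy is updated and the data collection repeats; note that A never appears in the learning equations.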

Integral Reinforcement Learning (IRL)
Solve the Bellman equation - this solves the Lyapunov equation without knowing the dynamics:
W_k^T [\phi(x(t)) - \phi(x(t+T))] = \int_t^{t+T} x(\tau)^T (Q + K_k^T R K_k) x(\tau) \, d\tau = \rho(t, t+T)

Data set at time [t, t+T): { x(t), \rho(t, t+T), x(t+T) }

Timeline: apply u_k = -K_k x; observe x(t), x(t+T), x(t+2T), ...; observe the cost integrals \rho(t, t+T), \rho(t+T, t+2T), \rho(t+2T, t+3T), ...; update P after each interval.
Do RLS until convergence to P_k, or use batch least-squares, then update the control gain K_{k+1} = R^{-1} B^T P_k.
This is a data-based approach that uses measurements of x(t), u(t) instead of the plant dynamical model. A is not needed anywhere.

Gain update (policy): K_k, k = 0, 1, 2, ...    Control: u_k(t) = -K_k x(t)
The reinforcement intervals T need not be the same; they can vary and can be selected online in real time.
Continuous-time control with discrete gain updates.

Persistence of Excitation
W_k^T [\phi(x(t)) - \phi(x(t+T))] = \int_t^{t+T} (Q(x) + u_k^T R u_k) \, dt
The regression vector must be persistently exciting (PE); this relates to the choice of the reinforcement interval T.

Implementation
Policy evaluation: need to solve online
W_k^T [\phi(x(t)) - \phi(x(t+T))] = \int_t^{t+T} x(\tau)^T (Q + K_k^T R K_k) x(\tau) \, d\tau = \rho(t, t+T)
Add a new state, the integral reinforcement: \dot{\rho} = x^T Q x + u^T R u
This is the controller dynamics, or memory.

Draguna Vrabie - Direct Optimal Adaptive Controller
Solves the Riccati equation online without knowing the A matrix.
CT actor-critic structure:
- Critic: run RLS or batch least-squares to identify the value of the current control, with a ZOH sampling at intervals T
- Integral reinforcement \dot{\rho} = x^T Q x + u^T R u: IRL requires a dynamic control system with MEMORY
- Actor: u = -Kx applied to the system \dot{x} = Ax + Bu; the feedback gain is updated after the critic has converged
A hybrid continuous/discrete dynamic controller whose internal state is the observed cost over the interval.
The reinforcement interval T can be selected online, on the fly, and can change.

Optimal Adaptive IRL for CT Systems - D. Vrabie, 2009
Actor/critic structure for CT systems:
Critic (reinforcement learning, cost update): V_k(x(t)) = \int_t^{t+T} r(x, u_k) \, d\tau + V_k(x(t+T))    (compare theta waves, 4-8 Hz)
Actor (control update): u_{k+1} = h_{k+1}(x) = -\tfrac{1}{2} R^{-1} g^T(x) \, \partial V_k/\partial x    (compare motor control, 200 Hz)
A new structure of adaptive controllers.

Data-driven Online Adaptive Optimal Control
User-prescribed optimization criterion J(Q, R).
On-line performance loop (critic):
x^T(t) P_k x(t) = \int_t^{t+T} x^T(\tau)(Q + K_k^T R K_k) x(\tau) \, d\tau + x^T(t+T) P_k x(t+T),    gain update K_{k+1} = R^{-1} B^T P_k
(this replaces the offline ARE 0 = PA + A^T P + Q - P B R^{-1} B^T P)
On-line control loop: u = -Kx applied to the system \dot{x} = Ax + Bu
Data set at time [t, t+T): { x(t), \rho(t, t+T), x(t+T) }
An online supervisory control procedure that requires no knowledge of the system dynamics matrix A.
It automatically tunes the control gains in real time to optimize a user-given cost function, using measured data (u(t), x(t)) along the system trajectories.

Optimal Control Design Allows a Lot of Design Freedom

IRL Value Iteration - Draguna Vrabie

IRL Policy Iteration (an initial stabilizing control is needed):
Policy evaluation - IRL Bellman equation (cost update; CT PI Bellman equation = Lyapunov equation):
V_k(x(t)) = \int_t^{t+T} r(x, u_k) \, d\tau + V_k(x(t+T))
Policy improvement - control gain update:
u_{k+1} = h_{k+1}(x) = -\tfrac{1}{2} R^{-1} g^T(x) \, \partial V_k/\partial x
Converges to the solution of the HJB equation 0 = (dV^*/dx)^T f + Q(x) - \tfrac{1}{4}(dV^*/dx)^T g R^{-1} g^T (dV^*/dx).

IRL Value Iteration (an initial stabilizing control is NOT needed):
Value evaluation - IRL Bellman equation (cost update; CT VI Bellman equation):
V_{k+1}(x(t)) = \int_t^{t+T} r(x, u_k) \, d\tau + V_k(x(t+T))
Policy improvement - control gain update:
u_{k+1} = h_{k+1}(x) = -\tfrac{1}{2} R^{-1} g^T(x) \, \partial V_{k+1}/\partial x
Converges if T is small enough.
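In the least-squares implementation, the only difference between the two schemes is where the unknown weights appear. A small sketch (a hypothetical helper function, using the same quadratic-basis setup as above) of the value-iteration update:

```python
import numpy as np

def irl_vi_update(W_k, Phi_t, Phi_tT, rho):
    """One IRL value-iteration step (illustrative helper, not from the talk).

    Phi_t, Phi_tT : stacked basis vectors phi(x(t)) and phi(x(t+T)) over several intervals
    rho           : measured reinforcement integrals over those intervals
    Solves  W_{k+1}^T phi(x(t)) = rho + W_k^T phi(x(t+T))  in the least-squares sense.
    Policy iteration would instead solve  W_k^T [phi(x(t)) - phi(x(t+T))] = rho,
    with the same unknown weights appearing in both occurrences of the value.
    """
    targets = rho + Phi_tT @ W_k                 # right-hand side uses the OLD weights
    W_next, *_ = np.linalg.lstsq(Phi_t, targets, rcond=None)
    return W_next
```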


Theta rhythms, 4-10 Hz: deliberative evaluation, limbic system control.
Motor control: 200 Hz.
Doya, Kimura, Kawato 2001.

Summary of Motor Control in the Human Nervous System
- Cerebral cortex, motor areas: long-term memory functions
- Basal ganglia, limbic system, thalamus: reinforcement learning (dopamine)
- Hippocampus: unsupervised learning; theta rhythms, 4-10 Hz
- Cerebellum: short-term, supervised learning; gamma rhythms, 30-100 Hz
- Brainstem (eye movement), inferior olive
- Spinal cord: motor control at 200 Hz; reflexes, muscle contraction and movement
A hierarchy of multiple parallel loops (after Kenji Doya; picture by E. Stingu and D. Vrabie).

Synchronous Real-time Data-driven Optimal Control

Optimal Adaptive Integral Reinforcement Learning for CT Systems - D. Vrabie, 2009
Policy Iteration gives the structure needed for the online optimal solution.
Actor/critic structure for CT systems:
Critic (reinforcement learning, cost update): V_k(x(t)) = \int_t^{t+T} r(x, u_k) \, d\tau + V_k(x(t+T))    (compare theta waves, 4-8 Hz)
Actor (control update): u_{k+1} = h_{k+1}(x) = -\tfrac{1}{2} R^{-1} g^T(x) \, \partial V_k/\partial x    (compare motor control, 200 Hz)
A new structure of adaptive controllers.

Synchronous Online Solution of Optimal Control for Nonlinear Systems - Kyriakos Vamvoudakis
Critic network: take the VFA as V(x) = \hat{W}_1^T \phi_1(x), so \partial V/\partial x = \nabla\phi_1^T \hat{W}_1.
Then the IRL Bellman equation V(x(t)) = \int_t^{t+T} (Q(x) + u_k^T R u_k) \, dt + V(x(t+T)) becomes
\hat{W}_1^T \phi_1(x(t)) = \int_t^{t+T} (Q(x) + u_k^T R u_k) \, dt + \hat{W}_1^T \phi_1(x(t+T))

Action network for the control approximation:
u(x) = -\tfrac{1}{2} R^{-1} g^T(x) \nabla\phi_1^T \hat{W}_2

Define \Delta\phi(x(t)) = \phi_1(x(t)) - \phi_1(x(t+T)). The Bellman equation becomes
0 = \Delta\phi(x(t))^T \hat{W}_1 + \int_t^{t+T} [ Q(x) + \tfrac{1}{4} \hat{W}_2^T \bar{D}_1 \hat{W}_2 ] \, dt

Data-driven Online Synchronous Policy Iteration using IRL - Vamvoudakis & Vrabie
Does not need to know f(x).

Theorem (Vamvoudakis & Vrabie) - Online Learning of Nonlinear Optimal Control.
Let \sigma(t) = \Delta\phi(x(t)) = \phi_1(x(t)) - \phi_1(x(t+T)) and let \sigma/(1 + \sigma^T \sigma)^2 be persistently exciting. Tune the critic NN weights as
\dot{\hat{W}}_1 = -a_1 \frac{\sigma}{(1 + \sigma^T \sigma)^2} \Big[ \sigma^T \hat{W}_1 + \int_t^{t+T} \big( Q(x) + \tfrac{1}{4} \hat{W}_2^T \bar{D}_1 \hat{W}_2 \big) \, d\tau \Big]    (learning the value)
and tune the actor NN weights as
\dot{\hat{W}}_2 = -a_2 \Big\{ F_2 \hat{W}_2 - F_1 \sigma^T \hat{W}_1 - \tfrac{1}{4} \bar{D}_1(x) \hat{W}_2 \frac{\sigma^T}{(1 + \sigma^T \sigma)^2} \hat{W}_1 \Big\}    (learning the control policy)
Then there exists an N_0 such that, for a number of hidden-layer units N > N_0, the closed-loop system state, the critic NN error \tilde{W}_1 = W_1 - \hat{W}_1, and the actor NN error \tilde{W}_2 = W_1 - \hat{W}_2 are UUB (uniformly ultimately bounded).

Data set at time [t, t+T): { x(t), \rho(t, t+T), x(t+T) }
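A heavily simplified Euler-integration sketch of the structure of these coupled update laws, for orientation only: the robustifying terms F_1, F_2, the exact actor law, and the tuning gains are not reproduced, and the actor is simply driven toward the critic weights, which is the intuition the full law makes rigorous.

```python
import numpy as np

def synchronous_step(W1, W2, sigma, rho_Q, Dbar_int, a1=5.0, a2=1.0, dt=1e-3):
    """One Euler step of a simplified synchronous actor-critic tuning law.

    Illustrative only -- not the exact theorem laws.
    sigma    : phi1(x(t)) - phi1(x(t+T))                      (measured regression vector)
    rho_Q    : integral of Q(x) over [t, t+T]                  (measured state-cost reinforcement)
    Dbar_int : integral over [t, t+T] of grad_phi1 g R^-1 g^T grad_phi1^T
    """
    m = sigma / (1.0 + sigma @ sigma) ** 2               # normalized regressor
    bellman_err = sigma @ W1 + rho_Q + 0.25 * W2 @ Dbar_int @ W2
    W1_next = W1 - dt * a1 * m * bellman_err             # critic: gradient descent on Bellman error
    W2_next = W2 - dt * a2 * (W2 - W1)                   # actor: pulled toward the critic (simplified)
    return W1_next, W2_next
```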

Lyapunov energy-based proof. Take
L(t) = V(x) + \tfrac{1}{2} tr(\tilde{W}_1^T a_1^{-1} \tilde{W}_1) + \tfrac{1}{2} tr(\tilde{W}_2^T a_2^{-1} \tilde{W}_2)
where V(x) is the unknown solution of the HJB equation
0 = (dV/dx)^T f + Q(x) - \tfrac{1}{4}(dV/dx)^T g R^{-1} g^T (dV/dx),
\tilde{W}_1 = W_1 - \hat{W}_1 and \tilde{W}_2 = W_1 - \hat{W}_2, and W_1 is the unknown least-squares solution of the Bellman equation for the given N:
H(x, W_1, u) = W_1^T \nabla\phi_1 (f + gu) + Q(x) + u^T R u = \varepsilon_H
This guarantees stability.

Synchronous Online Solution of Optimal Control for Nonlinear Systems
K.G. Vamvoudakis and F.L. Lewis, "Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem," Automatica, vol. 46, no. 5, pp. 878-888, May 2010.
A new form of adaptive control with TWO tunable networks - an adaptive critic structure:
Critic (reinforcement learning): \dot{\hat{W}}_1 = -a_1 \frac{\sigma}{(1 + \sigma^T \sigma)^2} \big[ \sigma^T \hat{W}_1 + \int_t^{t+T} ( Q(x) + \tfrac{1}{4} \hat{W}_2^T \bar{D}_1 \hat{W}_2 ) \, d\tau \big]
Actor: \dot{\hat{W}}_2 = -a_2 \big\{ F_2 \hat{W}_2 - F_1 \sigma^T \hat{W}_1 - \tfrac{1}{4} \bar{D}_1(x) \hat{W}_2 \frac{\sigma^T}{(1 + \sigma^T \sigma)^2} \hat{W}_1 \big\}
Two learning networks, tuned simultaneously. A new structure of adaptive controllers.

A New Class of Adaptive Control
- Optimal adaptive: identify the performance value V(x) = W^T \phi(x)
- Indirect adaptive: identify the system model
- Direct adaptive: identify the controller
(block diagram of controller, plant, and output)

Data-driven Online Solution of Differential Games
Synchronous Solution of Multi-player Non-Zero-Sum Games

Multi-player Differential Games
Multi-player Game Solutions, IEEE Control Systems Magazine, Dec. 2017

Games on Communication Graphs
500 BC, 孙子兵法 (Sun Tzu, The Art of War)

F.L. Lewis, H. Zhang, A. Das, and K. Hengster-Movric, Cooperative Control of Multi-Agent Systems: Optimal Design and Adaptive Control, Springer-Verlag, 2013.
Key point: Lyapunov functions and performance indices must depend on the graph topology.
H. Zhang, F.L. Lewis, and Z. Qu, "Lyapunov, adaptive, and optimal design techniques for cooperative systems on directed communication graphs," IEEE Trans. Industrial Electronics, vol. 59, no. 7, pp. 3026-3041, July 2012.
H. Zhang, F.L. Lewis, and A. Das, "Optimal design for synchronization of cooperative systems: state feedback, observer and output feedback," IEEE Trans. Automatic Control, vol. 56, no. 8, pp. 1948-1952, August 2011.

Graphical Games
Synchronization - the cooperative tracker problem:
Node dynamics: \dot{x}_i = A x_i + B_i u_i,    x_i(t) \in R^n,    u_i(t) \in R^{m_i}
Target generator dynamics: \dot{x}_0 = A x_0
Objective: x_i(t) \to x_0(t)

