Time-triggered Communication for Distributed Control

In Proc. of the 23rd Intl. Digital Avionics and System Control Conference (DASC'04), 2004.

Time-triggered Communication for Distributed Control Applications in a Timed Computation Model

Guido Menkhaus, Michael Holzmann and Sebastian Fischmeister, University of Salzburg, Austria

Abstract

Distributed real-time control applications consist of sets of tasks that interact with the physical world through sensors and actuators and are executed on a dispersed set of locations that are interconnected by a communication subsystem. Timeliness and safety requirements of the application demand deterministic execution of tasks and predictive communication. Deterministic and predictable systems can be built if upper bounds for processing and communication latencies are known and event arrivals have deterministic distributions.

In this paper we describe the timing definition language (TDL) system architecture implementing time-triggered computation and time-triggered communication. The TDL system implements the timed computation model and its architecture consists of two parts: TDL-Exe (for time- and value-deterministic execution of tasks) and TDL-Com (for predictive communication of values). The paper presents TDL-Exe and describes implementation details of TDL-Com.

Keywords: real-time control systems, timed computation model, time-triggered communication, system architecture

1 Introduction

Most modern control applications are implemented in software, dedicated to performing specific tasks and interacting with the physical world through sensors and actuators. Physical and software processes differ conceptually in their treatment of the role of time [1]. Physical processes evolve in real-time and software processes evolve in so-called soft-time. Soft-time coincides with real-time only at the instances of input and output activities.
Timed real-time programming deals with the activities of mapping soft-time to real-time [2].

Real-time control systems are often implemented as distributed systems where a set of computational nodes is interconnected by a communication system. The dispersal of application resources, organizational issues, and fault-tolerance mechanisms are design rationales for distributed systems. In such a system, each node executes a set of tasks contributing a specific functionality to the overall control system. Individual tasks cooperate, and the results of one task may need to be communicated to tasks located at different nodes.

The design of a real-time distributed control system can broadly be divided into global and local design issues [3]:

- Global design decisions. Global design decisions deal with activities that are relevant to more than one computational node (e.g., communication concerns). These activities must be coordinated among the computational nodes, which have to work consistently together towards the goal of the control system.

- Local design decisions. Local design decisions are concerned with activities within a single computational node (e.g., the set of tasks released on this node needs to be invoked and executed in a consistent and synchronized way).

The correctness of a real-time control system depends on the computed values and on the point in time at which these values are available. A key problem of distributed real-time systems is the timely interaction between the system and the environment while maintaining consistency and correctness of data. To achieve consistency in

distributed systems in the time and value domain, the system must be made deterministic. Being deterministic is mandatory for software systems that control physical systems that are ruled by deterministic physical laws [4].

- Non-determinism. A non-deterministic system does not have a unique output sequence for a given input sequence. An external observer cannot consistently predict the behavior of the system. In a deterministic (i.e., predictable) system the development of future states of the system can be predicted.

- Value determinism. A system is said to be value deterministic if the same sequence of inputs produces the same sequence of outputs.

- Time determinism. If the system produces for the same sequence of inputs the same sequence of outputs at always the same time, it is time-deterministic.

Determinism in time-triggered systems is accomplished by introducing a logical execution time (LET) and a logical computation time (LCT) for each task, as well as a logical transmission time (LTT) for communicating messages between computational nodes.

- Logical execution time. Input and output ports define logical points of interaction between tasks. The start of the LET marks the point in time when the values from input ports to a task are read. The end of the LET marks the point in time when the results of the computation of a task become available at the output ports to other tasks or actuators. Even if the output of a task becomes available prior to the end of the LET, the output values will not be released before the expiration of the LET.

- Logical computation time. The LCT specifies a fixed time interval in real-time in which the task is active. The task is schedulable if there is enough soft-time available within the interval to execute the task. According to a scheduling scheme, the task starts after it has been released, may be preempted, but resumes and completes its execution before the LCT has elapsed. The release and the termination of a task are time-triggered events emitted at the start and the end of the LCT. The LCT of a task is always greater than or equal to its worst case execution time (WCET).

- Logical transmission time. The LTT is determined by a time-triggered communication schedule. The schedule defines when the communication system transfers values from one computational node to another and when the next transmission will take place. The LTT of transferring a value from one node to another is always greater than or equal to the worst case communication time (WCCT) [5].

The LTT is important for the inter-node communication and is modeled in the global design decisions. The LCT is determined during the local design decisions and deals with computational resources on a single node.

- Centralized Application. For a control application executed on a single computational node, the length of the LET equals the length of the LCT (see Figure 1). Local intra-node communication is instantaneous, which means that reading inputs and writing outputs happens conceptually in zero time.

Figure 1. Logical execution time in a centralized application.

- Decentralized Application. To sustain the principle of instantaneous communication in a distributed control application, the LET consists of the LCT and the LTT (see Figure 2).

Information necessary to determine the LET of a task is the invocation frequency that the control law of the controlled object requires. The length of the

LCT must be larger than the WCET. The schedulability test of the TDL compiler verifies this property. The length of the LTT is determined by the WCCT, which represents the upper bound required to transmit a message from the sender to the receiver over the network.

Figure 2. Logical execution time in a decentralized application.

In this paper we present the TDL system architecture that allows for the design of control applications using time-triggered computation and time-triggered communication. TDL [6] is based on the concept of a fixed LET of tasks. The communication system of TDL provides the programmer with a timing abstraction for implementing distributed control systems with hard real-time constraints. For an end-to-end time-triggered approach, it is necessary to provide time and value determinism for the global and the local design.

The remainder of the paper is structured as follows: Section 2 presents the motivation of the work. Section 3 discusses related work such as TTP and FlexRay. Section 4 presents the TDL system architecture for centralized and decentralized systems. Section 5 provides details about the implementation, and characteristics of the decentralized system are discussed in Section 6. Section 7 concludes the paper.

2 Motivation

The software implementation of distributed real-time control applications must be predictable yet flexible. Predictable, because hard real-time applications are time and safety critical.
Flexible, because not all tasks have to be specified fully before run-time. Non-deterministic systems require over-sizing of computing and communication resources to avoid time delays in worst load case situations: for example in dynamic control systems, in which the upper limit of processing and communication latencies is unknown and the event or task arrivals have non-deterministic distributions [7].

Event-triggered systems offer more choices for scheduling task computation and communication. However, it is difficult to build deterministic systems utilizing event-triggered communication together with the jitter introduced by communication-error correction.

In time-triggered systems, communication and computation of tasks are predictable. Predictability implies deterministic temporal system behavior under the imposed timing and functional constraints. For those systems the functional timing requirements are always met. Static schedules are used to plan task executions on each computational node. For this, however, the release time of tasks must be known a priori. Time-triggered communication provides predictable message transmission. Static schedules drive the communication system and determine the timely transmission of messages.

3 Related Work

A number of time-triggered systems have been devised in the context of distributed control applications for safety-critical hard real-time systems.

The time-triggered protocol (TTP) [8] provides time-triggered communication of messages and static cyclic scheduling of application tasks. Members of the TTP family are the time-triggered protocol TTP/C, intended for safety-critical hard real-time applications, and TTP/A, intended for low-cost field bus applications. TTP/C provides distributed fault-tolerant clock synchronization, error detection, membership service, and redundancy management. Cyclic scheduling of application tasks is described with the task descriptor list (TADL). The TADL specifies the time of starting and stopping a task and the WCET of a task.
It describes the temporal behavior of the system before the system starts. The finishing time of a task is determined by the number and length of

preemptions. Results are immediately available to other tasks on the same node after the finishing time.

A consortium of companies supports and promotes the FlexRay Communications System [9, 10], which is a communication infrastructure for high-speed control systems targeting the automotive domain. The communication cycle of FlexRay consists of a static and a dynamic segment. Each communication cycle starts with the static segment. Similar to the time-triggered protocol, all communication is divided into slots, which the developer assigns to individual nodes. Following the static segment, the dynamic segment is intended for aperiodic messages such as burst transmissions or diagnosis information. The dynamic segment utilizes the flexible time-division multiple access (FTDMA) protocol ByteFlight [11], which uses message identifiers as the means for message scheduling. Applications using FlexRay can be implemented with OSEKtime [12]. OSEKtime is responsible for starting tasks according to a periodic task execution scheme (similar to the TADL) and it monitors the task deadlines. The time-triggered tasks can preempt each other.

TTP and FlexRay provide time-triggered communication that ensures time determinism on a global level. However, on a local level, i.e., on a single computational node, time determinism and value determinism cannot be guaranteed. We present the TDL system architecture for time-triggered computation and communication that aims at value and time determinism on both the global and the local level.

4 TDL System Architecture

The design process for a TDL control application can be split into a local design process, targeting a centralized single-processor solution, and local and global design processes for distributed control applications. The run-time environment for a centralized single-processor TDL control application is the TDL runtime environment (TDL-Exe), consisting of the E machine [13].
The TDL-Communication environment (TDL-Com) complements it for distributed control applications. We first present the activities of the local design process and the local runtime environment before discussing the global design process and TDL-Com.

4.1 Centralized System

The following steps lead to a TDL application for the centralized, single-processor solution (see Figure 3).

1. Task-Set Declaration. A TDL program can be modeled using a visual task modeling tool [14]. The essential idea of the timed computation model is time-triggered cyclic computation in which the LET, the LCT, and the WCET describe the timing behavior of a task.

The most important programming abstractions of TDL are modules, modes, tasks, and ports: The highest-level programming construct is a module. A module declares a set of modes, which specify sets of tasks and other activities that are executed periodically and in parallel. A TDL module can only be in one mode at a time, but can change from one mode to another at the end of a period. A module may have a start mode. If a module has a start mode, it is an executable TDL program and the application described by the TDL program starts executing in this mode. A task has a set of input ports, a set of output ports, and a set of drivers that handle the data for the ports. Ports are logical points of interconnection between tasks and modes. Task drivers copy values from an output task port to an input task port, and there are drivers for sensor readings and actuator updates. Mode drivers read sensors and update mode ports, which are a subset of the task output ports.

2. Task Timing Definition. A TDL mode specifies the invocation period, i.e., the length of one computation cycle. The LET of a task is determined with respect to the invocation period of the mode to which the task is assigned.
It is calculated by dividing the invocation period of the mode by the task's frequency.

Task drivers, sensor drivers, and actuator drivers differ in the fact that task and sensor drivers are called at the beginning and the end

of the LET, whereas actuator drivers are treated like tasks having their own invocation period. If the set of tasks is schedulable, the TDL program is time-safe. However, time safety depends on the correct analysis and calculation of the WCET of each task for the given platform.

Figure 3. Centralized TDL system architecture for a single-processor application.

3. TDL Program. The visual modeling tool generates a TDL program.

4. TDL Model. On the basis of a TDL program, a TDL model generator produces a Simulink model [15]. Simulink is designed for the simulation and modeling of control laws. For modeling, Simulink provides a graphical user interface for building models as block diagrams. It includes a comprehensive block library of sinks, sources, components, and connectors. The Simulink model that results from the TDL program (the TDL model) can be simulated to validate the timing and functional behavior. This is especially helpful if the application code has been modeled in Simulink.

5. E code. The TDL compiler compiles a TDL program into E code for the E machine [16]. E code is a platform-independent assembler language that targets the E machine. The E machine executes E code that ensures the timing consistency of the task and driver executions. The E code consists of a small set of instructions for basic control flow and processing that allows for synchronous driver calls, task scheduling, and initializing the execution of a set of E code instructions at some point in time in the future.

6. TDL-Exe (E machine). The TDL runtime part of the architecture is represented by TDL-Exe, consisting of the E machine.
The E machine is a virtual machine that executes the platform-independent E code and calls the platform-dependent application code.
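The E code instruction set described in step 5 can be illustrated with a minimal interpreter loop. This is only a sketch: the opcode names (E_CALL, E_RELEASE, E_FUTURE, E_RETURN) and the encoding are our own illustration, not the actual E code instruction set, which differs in detail.

```c
#include <assert.h>

/* Minimal E-machine-style interpreter sketch. Opcode names and encoding
 * are ours; the real E code instruction set differs in detail. */
typedef enum { E_CALL, E_RELEASE, E_FUTURE, E_RETURN } e_opcode;

typedef struct {
    e_opcode op;
    int arg;               /* driver index, task index, or time delay */
    int target;            /* E_FUTURE: address of the future E code block */
} e_instr;

typedef struct {
    int drivers_called;
    int tasks_released;
    int future_time;       /* time at which the future block becomes due */
    int future_addr;
} e_state;

/* Execute one synchronous block of E code starting at pc. */
static void e_run(const e_instr *code, int pc, int now, e_state *st) {
    for (;;) {
        const e_instr *i = &code[pc++];
        switch (i->op) {
        case E_CALL:    st->drivers_called++; break;  /* run a driver now   */
        case E_RELEASE: st->tasks_released++; break;  /* hand to scheduler  */
        case E_FUTURE:  st->future_time = now + i->arg; /* re-arm trigger   */
                        st->future_addr = i->target; break;
        case E_RETURN:  return;
        }
    }
}
```

Within one trigger instant all instructions execute synchronously (in logical zero time); only E_FUTURE introduces the passage of time, which is what makes the driver and task timing deterministic.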

Figure 4. Distributed TDL system architecture.

4.2 Decentralized System

The TDL system architecture for a decentralized (or, equivalently, distributed) real-time control system is modeled by a set of TDL nodes that are interconnected by a real-time communication system. Each node consists of a TDL-Exe and a TDL-Com part (see Figure 4).

TDL-Exe is not concerned with the global design of the communication between nodes, because the synchronization is done implicitly by the timing definitions of the tasks on each node. Each task provides the data items that need to be transmitted to a different node. However, the timing definitions of tasks on each node are designed in mutual agreement with tasks running on other nodes.

TDL-Com supplies a data communication system that allows for transmitting values from task output ports of a TDL program to input ports of a task running on a different node. TDL-Com uses a time-triggered communication subsystem to transmit data. It works autonomously: sending and receiving of messages happens without any interaction from the application program.

4.3 TDL-Com Interfaces

TDL-Com exposes three interfaces to TDL-Exe (see Figure 4):

1. Com Interface. The Com interface mediates between TDL-Exe and the communication subsystem of TDL-Com. It allows TDL-Exe to submit and retrieve data values via drivers that are connected to input and output ports. Input and output ports define logical points of interaction between tasks and the Com interface. The drivers copy values of output ports of a task to the input ports of the Com interface, which forwards them to the communication network interface. The Com interface has a set of output ports, whose values are destined for input ports of tasks and are then retrieved via task drivers.

2. FT Interface. The FT interface allows for access to information related to fault tolerance. Redundantly produced and communicated values have status fields, which provide information, for example, on the number of active replicas, the status of fault-tolerant communication, or the confidence values of fusion algorithms.

3. Native Interface. The native interface allows for access to platform-specific services, such as the error counter of the CAN controller.

4.4 TDL-Com Architecture

Figure 5. Distributed TDL architecture.

Figure 5 illustrates the TDL system architecture of a network node of a distributed TDL application. TDL-Exe provides the runtime environment which

executes TDL applications. TDL-Com builds on a time-triggered communication subsystem. The communication subsystem of each network node processes all communication activities autonomously. The communication network interface is the interface between the communication controller and the TDL-Com layer. The communication network interface contains the messages that are sent and received by the communication subsystem. TDL-Com reads data from the communication network interface and passes it to the TDL-Exe layer. TDL-Com writes data to the communication network interface to send data submitted by the TDL-Exe layer. The interface between TDL-Exe and TDL-Com is the Com interface, which allows for data exchange via drivers and ports.

The communication schedule list determines the temporal behavior of the communication subsystem. Each node stores such a list. This list specifies at which point in time (time slot) a node is allowed to send messages and at which point in time it will receive messages.

Figure 5 shows an example of sending and receiving a message. Task T1 produces a value A and submits it to TDL-Com in Step 1. In Step 2, TDL-Com copies value A to location L1 within the communication network interface (it will be communicated in time slot 2 with message M1). When sending message M1 in Step 3, the communication subsystem reads the data from the indicated location L1 in the communication network interface, generates the message M1, and transmits it. In Step 4, the communication subsystem receives message M2 during time slot 3. It stores its data at the indicated location L2. TDL-Com copies the data item D from the communication network interface to the value D within the Com interface (Step 5). Finally, task T2 consumes the value in Step 6.

Temporal Synchronization of TDL-Com. As described above, there are temporal dependencies between TDL-Exe, TDL-Com, and their subsystems. For example, a value has to be produced, copied, and marshaled before its transmission.
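A per-node communication schedule list of the kind described above can be sketched as a small slot table. The type and field names (sched_entry, cni_offset) are our own illustration, not the actual TDL-Com data layout.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative per-node communication schedule list: each entry states what
 * this node does in a given time slot of the cycle. Names and layout are
 * our own sketch, not TDL-Com's actual structures. */
typedef enum { SLOT_IDLE, SLOT_SEND, SLOT_RECEIVE } slot_action;

typedef struct {
    int slot;              /* time slot number within the communication cycle */
    slot_action action;    /* send or receive in this slot                    */
    int msg_id;            /* message sent or expected in this slot           */
    size_t cni_offset;     /* location within the communication network
                              interface (e.g., L1 or L2 in Figure 5)          */
} sched_entry;

/* Look up what the node has to do in the given slot, if anything. */
static const sched_entry *sched_lookup(const sched_entry *list, size_t n,
                                       int slot) {
    for (size_t i = 0; i < n; i++)
        if (list[i].slot == slot)
            return &list[i];
    return NULL;
}
```

At runtime the TDL-Com layer would consult this table on every slot boundary: a SLOT_SEND entry tells it which communication network interface location to marshal into a frame, and a SLOT_RECEIVE entry where to store an incoming message's data.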
The dependencies result from the fact that the time-triggered communication subsystem runs autonomously and that the progression of time drives the timing of the whole node. The activities of the TDL-Com and TDL-Exe layers need to be synchronized with the activities of the communication subsystem.

From the point of view of TDL-Exe, communicating values between tasks (reading input and writing to output ports) is transparent in centralized as well as in decentralized applications. However, in centralized applications, the LCT determines the LET of a task (i.e., LCT = LET). In decentralized applications, if values of output ports of a task need to be transmitted to a task on a different node, the LET consists of the LCT plus the LTT (i.e., LET = LCT + LTT). The LTT always succeeds the LCT. The LET includes the transmission time of a message. Consequently, from the point of view of TDL-Exe, inter-node communication happens conceptually in zero time (i.e., it is transparent to the single-node computation model).

TDL-Com and the communication subsystem synchronize their timing via the communication schedule list. TDL-Exe is implicitly coordinated with the communication subsystem by the timing definitions of the tasks on each node. However, the communication schedule list and the local design of the timing definitions of the tasks may raise global design restrictions (scheduling restrictions) that need to be resolved.

4.5 TDL-Com Toolchain

The construction of the communication schedule list of the communication subsystem is an off-line activity. The schedule is then used on-line (during runtime) to ensure predictive communication. Figure 6 illustrates the off-line activities of the TDL-Com toolchain.

- Off-line. The off-line part of the toolchain determines the communication requirements for the distributed application and generates a global communication schedule list and the TDL-Com schedules for each network node. To generate the global communication schedule list, the TDL-Com compiler determines the communication requirements of the whole decentralized application. It scans TDL programs and modules and detects accesses to

non-local data ports. Access to remote ports results in communication requirements. Optimization of communication can be achieved, for example, by packaging several data items within the same invocation period into a single message.

A TDL-Com compiler plug-in maps the communication requirements into the format of a vendor-specific bus scheduling tool, which generates the network schedule for a specific communication platform (e.g., TTP, FlexRay, TTCAN). The plug-in reads and analyzes the generated network schedule and provides the TDL-Com compiler with the information to generate the TDL-Com schedule lists for each node.

- On-line. The TDL-Com layer provides interfaces to TDL-Exe and to the communication subsystem (see Figure 5). It copies values to be transmitted between the Com interface and the communication controller interface and vice versa, marshals values into communication frames, and supports voting for fault tolerance. The behavior of the TDL-Com layer is statically pre-defined before runtime. The activities are cyclically repeated.

5 TDL-Com Prototype Implementation on Top of CAN

The prototype implementation of TDL-Com uses time-triggered CAN (TTCAN) [17] as the communication protocol. TTCAN extends the CAN protocol by providing time-triggered communication via the standard physical CAN link. CAN [18] is a mature standard and widespread in the field of automation as well as in the automotive field.

We implemented a proprietary version of the TTCAN protocol in software, which we call software TTCAN (sTTCAN). The reasons were as follows: (1) it allows for adaptation to the needs of the TDL-Com prototype implementation, (2) there were no embedded boards equipped with TTCAN controllers available, (3) the TTCAN chip was still in evaluation status, and (4) a supporting tool chain was lacking. A benefit of the TTCAN software implementation is that it can be run on standard embedded boards and ECUs without the need to change or modify the hardware to profit from reliable, time-triggered communication.
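As a sketch of the slot arithmetic such a software TTCAN layer has to perform, the following derives the current TDMA slot and the start time of the next occurrence of a given slot from a synchronized local clock. The slot count and slot length are assumed values for illustration, not parameters of the actual sTTCAN implementation.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of TDMA slot arithmetic for a software time-triggered CAN layer.
 * SLOTS_PER_CYCLE and SLOT_LEN_US are illustrative assumptions. */
#define SLOTS_PER_CYCLE 8u
#define SLOT_LEN_US     1000u   /* 1 ms per slot (assumed) */

/* Slot currently in progress, derived from the synchronized local time. */
static uint32_t current_slot(uint32_t now_us) {
    return (now_us / SLOT_LEN_US) % SLOTS_PER_CYCLE;
}

/* Start time of the next occurrence of `slot` at or after `now_us`. */
static uint32_t next_slot_start(uint32_t now_us, uint32_t slot) {
    uint32_t cycle_len = SLOTS_PER_CYCLE * SLOT_LEN_US;
    uint32_t cycle_start = now_us - (now_us % cycle_len);
    uint32_t t = cycle_start + slot * SLOT_LEN_US;
    if (t < now_us)             /* slot already passed in this cycle */
        t += cycle_len;
    return t;
}
```

A node would combine this with its communication schedule list: for each SLOT_SEND entry it programs a timer for next_slot_start of that slot and hands the frame to the CAN controller when the timer fires.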
Like the TTCAN implementation by Bosch [19], we support transmitting sporadic messages within dedicated time slots. The objective of our implementation is to provide an implementation for the OSEK/VDX operating system.

The TDL-Com implementation is based on the Motorola MPC 555 PowerPC derivative and the OSEK/VDX operating system. The processor chip already integrates a variety of common I/O (e.g., two CAN controllers). KANIS OAK EMUF boards, hosting the Motorola MPC 555 processor, are the target hardware for our implementation. Each network node consists of one KANIS OAK EMUF board; the boards are interconnected via a CAN bus link. Each board features two physical CAN links including bus drivers and on-board connectors.

The prototype is implemented under OSEK/VDX using OSEKWorks, an OSEK implementation by WindRiver, and the development environment supplied by WindRiver [20].

Figure 6. Overview of the TDL-Com tool chain.

5.1 Clock Synchronization

Time-triggered communication requires a synchronized time base among the participating nodes to provide a time division multiple access (TDMA) bus arbitration scheme. TDMA allows a number of nodes to access a single transmission channel without interference by allocating unique time slots to each node within each channel.

The sTTCAN implementation of clock synchronization is inspired by TTCAN [17]. TTCAN uses a master-slave clock synchronization scheme based on the idea of the TTP/A fireworks protocol [21]. A dedicated station, the master, periodically sends a synchronization frame s, which other nodes use to synchronize their local clocks to the clock of the master.

We describe the clock synchronization algorithm using the notation of event-recording automata [22]. Timed automata have been introduced to model the behavior of real-time systems. They augment finite state automata with a set of clocks. Event-clock automata are timed automata that correlate the value of clocks and the occurrence of events and maintain their correspondence.

An automaton consists of a set of locations V and a finite set of event-recording clocks X. We write x_l to denote that the clock x ∈ X is assigned to a location l ∈ V. The infinite word w = (s, t_0)(s, t_1) . . . is emitted by the master node and is the input to the automaton that describes the clock synchronization system. The frequency of the occurrence of s is 1/p_i, with an equidistant period p_i = t_i − t_{i−1} for all i. The content of message s is the value of the clock of the master node at times t_0, t_1, . . . We write x_l^s to denote the time of the clock x at location l at the reception of synchronization message s of the master node.
We write x_m^s to denote the value of the clock of the master node at the time of sending the message s. At reception of message s at location l, we compute the deviation d_l^s between the value of the master clock x_m^s and the value of the clock at location l (we assume an instantaneous transmission of messages):

d_l^s = v(x_l^s) − v(x_m^s).

Clock values are measured in ticks. They are split into a macro tick and a micro tick part. We write ⟨v(x)⟩ to denote the micro tick part and ⌊v(x)⌋ to denote the macro tick part of v(x), such that v(x) = ⌊v(x)⌋ + ⟨v(x)⟩. Macro ticks t_M are counted in our implementation in OSEK system timer ticks. A macro tick is composed of a specific number m of micro ticks t_m (CPU timer ticks), such that t_M = m_0 · t_m at time t_0.

To synchronize a clock x_l^s, the nominal number of micro ticks for every macro tick is increased to slow down the clock and decreased to speed it up. To compensate for a clock deviation of d_l^s in period p_i, in the next synchronization period a number of micro ticks t_{m_i,corr} needs to be added to the nominal number of micro ticks that make up a macro tick during the next period p_{i+1}. t_{m_i,corr} is computed by dividing the deviation by the length of period p_i; sign(d_l^s) indicates a positive or a negative clock deviation:

t_{m_i,corr} = sign(d_l^s) · |d_l^s| / ⌊p_i⌋.

The number of micro ticks that make up a macro tick for the next period p_{i+1} is computed as

t_M = (m_{i−1} + t_{m_i,corr}) · t_m,

with m_i = m_{i−1} + t_{m_i,corr}. t_{m_i,corr} is usually a fractional number and its value is split into an integral and a fractional part. The integer part is immediately corrected as shown above. The fractional part is accumulated and corrected within the current period by adding or subtracting one more t_m as soon as the absolute sum exceeds 1.

Valuation of clocks uses the time-stamping mechanism of the MPC555 CAN controller. It automatically generates a time stamp at the time of the start of frame [18] of an incoming message.
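The correction rule above (integer part applied immediately, fractional part accumulated until its absolute value exceeds one micro tick) can be sketched as follows. The names are ours, and the double-based arithmetic is a simplification; an MPC555 implementation would more likely use fixed-point arithmetic.

```c
#include <assert.h>

/* Sketch of the sTTCAN micro-tick correction described in the text: the
 * deviation per macro tick is split into an integral part, applied to the
 * nominal micro-tick count immediately, and a fractional part, accumulated
 * until it amounts to a full micro tick. Names are ours. */
typedef struct {
    int    m;        /* micro ticks per macro tick for the next period */
    double frac_acc; /* accumulated fractional correction */
} clk_state;

static void clk_correct(clk_state *st, double deviation_uticks,
                        int period_macroticks) {
    double corr = deviation_uticks / period_macroticks; /* d / floor(p_i) */
    int integral = (int)corr;               /* truncates toward zero */
    st->frac_acc += corr - integral;        /* accumulate fractional part */
    st->m += integral;                      /* integer part applied at once */
    if (st->frac_acc >= 1.0)  { st->m += 1; st->frac_acc -= 1.0; }
    else if (st->frac_acc <= -1.0) { st->m -= 1; st->frac_acc += 1.0; }
}
```

A positive deviation (local clock ahead of the master) increases m, so the macro tick becomes longer and the local clock slows down, matching the rule stated above.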
The usual method of generating time stamps with interrupts is imprecise because of interrupt latencies. Furthermore, using interrupt service routines to generate the time stamps causes delays of running tasks and increases CPU load, which might negatively influence the temporal predictability of the system.

With th
