A Framework For Partial Data Flow Analysis*


A Framework for Partial Data Flow Analysis*

Rajiv Gupta and Mary Lou Soffa
{gupta,soffa}@cs.pitt.edu
Department of Computer Science
University of Pittsburgh
Pittsburgh, PA 15260

Abstract

Although data flow analysis was first developed for use in compilers, its usefulness is now recognized in many software tools. Because of its compiler origins, the computation of data flow for software tools is based on the traditional exhaustive data flow framework. However, although this framework is useful for computing data flow for compilers, it is not the most appropriate for software tools, particularly those used in the maintenance stage. In maintenance, testing and debugging are typically performed in response to program changes. As such, the data flow required is demand driven from the changed program points. Rather than compute the data flow exhaustively using the traditional data flow framework, we present a framework for partial analysis. The framework includes a specification language enabling the specification of the demand driven data flow desired by a user. From the specification, a partial analysis algorithm is automatically generated using an L-attributed definition for the grammar of the specification language. A specification of a demand driven data flow problem expresses characteristics that define the kind of traversal needed in the partial analysis and the type of dependencies to be captured. The partial analysis algorithms are efficient in that only as much of the program is analyzed as is actually needed, thus reducing the time and space requirements over exhaustively computing the data flow information. The algorithms are shown to be useful when debugging and testing programs during maintenance.

Keywords - control flow graph (CFG), program debugging, program testing, code optimization.

1 Introduction

Static program analysis was first developed in the early 70s for use in compiler optimizations, recognizing that knowledge about the flow of data values in a program leads to better register allocation and more run-time efficient code. Its use in parallelizing compilers is invaluable, as code must be transformed using data dependency information in order to fully exploit parallel architectures [5, 17]. In addition, static analysis has also become a primary component of many software tools, such as editors [20], debuggers, software testers [3, 9, 19], program integration tools, and parallel program analyzers [2, 4]. Data flow has proven to be especially useful in tools for the maintenance stage [6, 8]. Although compilers and software tools utilize static analysis to improve their capabilities and performance, there are important differences in the data flow information needed between these two classes of software.

Compilers require information about the flow of data for an entire program, as global optimizations are typically applicable to all code in the program. As such, data flow information is computed exhaustively using the traditional data flow framework [12] and is computed before optimizations are applied. The types of data flow or data dependency information needed are based on the kinds of optimizations and parallelizing transformations to be applied and are thus known beforehand. And lastly, optimizations are applied in many cases after a program has been debugged. Thus, the data flow computation is not really designed to easily incorporate changed program text.

On the other hand, data flow needed by many software tools is demand driven from one or more program points rather than exhaustive. For example, when debugging, we may want to know what data values reach a use at a program point or what statements impact the value of a variable at a program point. In data flow testing, after a change has been made in a program, we want to know the impact that change has on the data values that it can reach. Thus, software tools are interested in the flow of data from program points. Also, the data flow problems to be solved are not fixed before the software tool executes but can vary depending on the user. For example, at a program point during debugging, a user may want to ask such questions as what value will reach a point along any path and what value must reach a point along all paths, as well as other questions that would help locate bugs. And lastly, many tools are used while the program is under development or maintenance, and thus changes in the program are expected and must be efficiently handled.

Thus, exhaustive data flow information is needed in compilers, whereas the data flow needed in a number of software tools is demand driven. The types of data flow needed are fixed for compilers but not for software tools, and software tools need to respond efficiently to program changes. Although these basic differences exist in the data flow requirements for compilers and software tools, exhaustive algorithms derived from the data flow framework are typically used to compute the data flow for software tools.

*Partially supported by National Science Foundation Presidential Young Investigator Award CCR-9157371 and Grant CCR-9109089 to the University of Pittsburgh.

0-8186-6330-8/94 $04.00 © 1994 IEEE

This approach causes the computation of data flow information about parts of a program that are not required by the data flow problem. When changes are made to code, the data flow has to be recomputed exhaustively and compared to previous data flow, or has to be incrementally updated under the assumption that exhaustive data flow has already been computed [1, 18, 22]. A major problem with computing data flow information exhaustively is the high cost, both in execution time and in memory demands. Experimental studies show that performing analyses even over small or medium size programs can take several hours [13].

In order to provide more flexibility and efficiency in the data flow computation for software tools, we present a framework for the computation of demand driven data flow using partial analysis algorithms. As this framework supports the computation of demand driven data flow from a program point, only the part of the program required for the analysis is used to compute the data flow information. The framework is general in that many types of demand driven data flow problems needed for software tools can be expressed and computed. And lastly, a specification technique is included with the framework that enables the specification of data flow problems and the automatic generation of algorithms to perform the partial analysis. With this facility, the user is provided with a model for data flow problems and can express the particular problem of interest in the specification language. Using our framework, whenever data flow information with particular characteristics is required, the user only has to write a short specification identifying these characteristics. The characteristics identify the type of traversal through the program that is needed in the analysis and the dependencies required. Besides the specification technique, the framework also contains an L-attributed definition of the grammar for the specification language to actually generate the appropriate partial analysis algorithms. The framework is flexible in that additional characteristics of data flow problems can be easily added. We demonstrate the framework for a set of characteristics derived from common data flow problems for software tools. A prototype has been implemented, and the utility and efficiency of the partial analysis algorithms in testing and debugging are examined.

A partial analysis algorithm produced by our technique is efficient in that the analysis is controlled by the dependencies being sought. Only nodes that must be visited to compute the required dependencies are visited; i.e., the complexity grows with the size of the (partial) solution that is computed. Thus, we are computing less data flow information, which results in both space and time efficiencies. Our algorithms use the control flow graph as the program representation. Another type of representation that has been used to compute static program slices, a type of demand driven data flow, is the program dependence graph [5, 11]. However, this representation needs to have the data flow computed exhaustively and then selects the information to present to the user, using the program dependence graph. Also, slices using the program dependence graph are defined only from program points where values are used; for example, in debugging, more flexibility is needed, for we may want a slice from a program point where a variable is not used.

The next section of the paper discusses the characteristics and specification that we include in our framework for demand driven data flow for software tools. The technique for the automatic generation of partial analysis algorithms is presented in Section 3. Section 4 demonstrates our technique through the specification of various partial data flow algorithms useful in debugging, testing, and test case generation. Section 5 considers the related work, and a discussion of an implementation is included in Section 6.

2 Characteristics and Specification of Demand Driven Data Flow

We begin by identifying general properties of demand driven data flow. Demand driven data flow is defined from a program point or set of points. The end of a program is a valid program point, indicating that data flow information is required about the entire program or all possible execution paths. Demand driven data flow captures the data dependencies relative to a program object, such as a set of variables or statements.

Assume that we want to know where a newly introduced definition of a variable at a program point may be used (i.e., partial reachable uses) in def-use testing of a program. In this case, a forward traversal must be made from the definition, searching for the dependencies that can exist along any path from the given program point. Since interest is only in the dependence between a definition and its use, only direct dependencies are required. When a use of a variable is found, the statement is added to the data flow set and the search continues. The search for a use of a variable along a path ends when another definition of that variable is found. During the search, information reflecting the data flow information being sought is propagated. The identifying characteristics are that (i) the search is forward from a program point, (ii) uses are needed to identify dependencies, (iii) only immediate dependencies are required, and (iv) dependencies found along any path are needed.

Consider a different type of demand driven data flow problem, that of computing information useful during program debugging. Given a variable use, we want to know the locations of all definitions on which the use is directly or indirectly flow dependent. If any of these definitions are not constants, then another search must be established to determine their definitions, or the closure of the dependencies. Thus, the set of variables whose dependencies are required changes as data flow information is added to the computed data flow set. When a statement is added to the data flow set, other variables' dependencies must be found, and these variables are added to the set of variable definitions being searched. Thus, variables whose data dependencies are required to produce the needed information are both propagated and spontaneously generated during the search. The identifying characteristics are that (i) the search is backward from a program point, (ii) definitions are needed to identify dependencies, (iii) the closure of the dependencies is required, and (iv) each path from the given program point specified in the criterion is searched for dependencies.
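As a concrete illustration of this kind of backward closure search, the following sketch finds the definitions on which a set of variables is directly or indirectly flow dependent, spawning new criteria as defining statements are added to the data flow set. The graph encoding, names, and example graph here are our own assumptions, not the paper's notation.

```python
from collections import deque

# A tiny hand-made control flow graph (our example, not from the paper).
# Each node maps to its predecessors, the variables it defines, and the
# variables it uses.
PRED = {1: [], 2: [1], 3: [2], 4: [3]}
DEF  = {1: {"a"}, 2: {"b"}, 3: {"c"}, 4: set()}
USE  = {1: set(), 2: {"a"}, 3: {"b"}, 4: {"c"}}

def backward_closure_defs(start_node, variables):
    """Find all definitions on which the given variables are directly or
    indirectly flow dependent (a backward, closure, any-path search)."""
    dd = set()                                # computed data flow set
    wanted = {n: set() for n in PRED}         # variables sought at each node
    worklist = deque([(start_node, frozenset(variables))])
    while worklist:
        node, vars_sought = worklist.popleft()
        new = vars_sought - wanted[node]
        if not new:
            continue                          # nothing new to search for here
        wanted[node] |= new
        found = new & DEF[node]
        if found:
            for v in found:
                dd.add((v, node))             # a reaching definition of v
            # closure: the defining statement's own uses become new criteria
            worklist.append((node, frozenset(USE[node])))
        # a definition of v ends the search for v along this path
        survivors = new - DEF[node]
        for p in PRED[node]:
            worklist.append((p, frozenset(survivors)))
    return dd
```

Starting from node 4 with variable c, the search finds the definition of c at node 3, then (by closure) searches for b, then a, visiting only the nodes actually needed.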

The two types of data flow dependencies provided in our framework are variable dependencies, which relate definitions and uses of variables, and statement dependencies, which compute structural relationships among the statements (e.g., dominators). To specify a starting point for a variable computation, the user specifies a variable of interest at a program point. In the case of structural dependencies, the user specifies statements of interest at a program point.

To specify demand driven data flow computations of different types, we must specify the characteristics that describe the nature of the search that will enable the capture of relevant information by the data flow computation. In Fig. 1, we present the grammar for our specification language. The specification for the nature of the search includes the direction in which the search is to take place. The search can be carried out in either the forward or backward direction from the program point specified by the user. The extent of the search indicates whether the search should terminate when all statements that have a direct relationship (immediate) with the criterion have been found, or whether it should continue until all statements that have a direct or indirect (closure) relationship with the criterion have been found. For a forward problem, the user must indicate whether the statements being searched for are reachable from a given program point along all incoming edges of the statements or at least along any one incoming edge. Similarly, for a backward problem, the user must specify whether, from the statements being searched, a given program point can be reached along all outgoing edges of the statements or at least along any one outgoing edge. In the case of variable dependencies, the user must specify whether the search is being carried out for definitions (def) of variables or uses (use) of variables.

    DDConstruct -> Construct name : DDSpecify
    DDSpecify   -> Variable | Statement
    Variable    -> Vdep Search Reference
    Statement   -> Sdep Search
    Search      -> Direction Extent Path
    Direction   -> forward | backward
    Extent      -> immediate | closure
    Path        -> all | any
    Reference   -> def | use
    DDCompute   -> Compute name :- ( DDStart )
    DDStart     -> VInput | SInput
    DDOutput    -> VOutput | SOutput
    ProgPoint   -> in | before | after
    VInput      -> variable ProgPoint statement { , variable ProgPoint statement }*
    VOutput     -> { (variable ProgPoint statement, statement) }*
    SInput      -> statement { , statement }*
    SOutput     -> { (statement, statement) }*

Figure 1: Specifying the Construction and Computation of Demand Driven Data Flow.

The user first specifies a demand driven data flow problem using the DDConstruct specification given in Fig. 1. This specification associates a name with a partial analysis algorithm at the time the code is constructed. When a particular data flow computation is needed using the constructed partial analysis algorithm, the starting point is specified using the DDCompute specification. After the data flow has been computed, the name assigned to the computed set enables the user to access the items. When the construct statement shown below is encountered, an algorithm for computing definitions that reach uses of variables at various program points is constructed from the specification. The execution of the compute statement causes this algorithm to determine the reaching definitions for the given starting point in the program specified in VInput.

    Construct ReachingDefs: Vdep backward immediate any def
    Compute ReachingDefs :- ( VInput )

3 Generating Partial Analysis Algorithms

Next we describe the construction of a partial analysis algorithm from its specification. The construction of the algorithm is carried out during the parsing of the demand driven data flow specification. The actions required for constructing the algorithm are described by an L-attributed definition associated with the grammar in Fig. 1. An L-attributed grammar allows the use of synthesized attributes as well as restricted types of inherited attributes. The values of these attributes represent the characteristics of the data flow as well as the code for the partial data flow algorithm. Due to space considerations in this abstract, we make a simplifying assumption in the following discussion. Although array variables can be handled by our technique, we only discuss scalar variables. The primary modification for array variables is the terminating condition. In general, the search for array variables has to progress to the start of the program in a forward search, or to the end of the program in a backward search.

The nature of the search to be carried out during partial analysis is described by the synthesized attributes Order, Prev, Next, Ext, and Meet associated with the symbol Search (see Fig. 2). The attribute Order describes the order in which nodes are examined during the computation of data flow, the attributes Next and Prev identify the direction in which the flow graph is traversed during the computation of the data flow, Ext represents the scope of the search, and the Meet operator specifies whether the information must be obtainable along all paths or at least along any one path.
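The mapping from a specification to these search parameters can be pictured as a small table-driven translation. The sketch below is our own rendering; all Python names are assumptions, not the paper's notation.

```python
# Hypothetical rendering (ours) of how a specification such as
# "Vdep backward immediate any def" fixes the parameters of the search.

DIRECTION = {
    # direction -> (visit order, edges giving Prev, edges giving Next, Point)
    "forward":  ("depth-first",         "pred", "succ", "after"),
    "backward": ("reverse-depth-first", "succ", "pred", "before"),
}
MEET = {"all": set.intersection, "any": set.union}

def search_parameters(direction, extent, path, reference):
    """Translate the four specification keywords into the attributes that
    drive the generated partial analysis algorithm."""
    order, prev, nxt, point = DIRECTION[direction]
    return {
        "order": order, "prev": prev, "next": nxt, "point": point,
        "extent": extent,                 # "immediate" or "closure"
        "meet": MEET[path],               # combines Output sets of Prev nodes
        # Found: the statement set examined for inclusion in the data flow set
        "found": "def" if reference == "def" else "ref",
        # New: under closure, a found def spawns a search for its refs,
        # and a found use spawns a search for its defs
        "new": ({"def": "ref", "use": "def"}[reference]
                if extent == "closure" else None),
    }
```

For instance, the ReachingDefs specification above yields a reverse-depth-first order, Prev = Succ, a union meet, and no New set, since its extent is immediate.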

The Order is depth-first for forward data flow problems and reverse depth-first for backward data flow problems. In the case of a forward search, the Prev set of a statement is the set of the statement's predecessors (Pred) and the Next set is the statement's successors (Succ) in the control flow graph. In the case of a backward data flow problem, it is the opposite. The Meet operator is union for any-path problems and intersection for all-path problems. An additional attribute Point is used during the construction of new criteria that are added during the computation of the demand driven data flow when the Extent of the search is specified as closure. Its value is after for forward problems and before for backward problems.

The properties of the data flow problem determined from the specification are described by the attributes Found and New of the symbol Reference. The attribute Ext is inherited by Reference. The Found attribute identifies the set associated with a statement that is examined to determine whether the statement should be included in the data flow set. Thus, it is the statement's reference set (Ref) if we are looking for uses of variables (use), and it is the statement's definition set (Def) if we are looking for definitions of variables. The attribute New identifies the set specifying the additional information regarding a statement that the partial analysis algorithm must find after a statement has been included in the data flow set. Thus, it is used when the extent of the search is specified as closure. The extent of the search can be determined by examining the inherited attribute Ext associated with Reference. If we are looking for uses of a variable (use), then the New set is the Def set of the statement just included in the data flow set. However, if we are looking for definitions (def), then the New set is the Ref set of the statement included in the set.

Let us next consider the computation of variable dependencies. The code that computes data flow is constructed from the attributes of the data flow problem associated with the symbols Search and Reference (see Fig. 3). The resulting code is represented by the value of the attribute Code associated with Variable. In the code for the value of attribute Code, the sets Input and Output represent data flow sets associated with each statement, maintained to assist in capturing data dependencies. These sets contain names of variables, and their associated program points, whose definitions/uses are being searched. In the case of a forward data flow problem, the Input set contains variables of interest at the point just before a statement and the Output set contains relevant variables after the statement. In the case of a backward data flow problem, it is the opposite. The variables in the Output set of a statement consist of spontaneous variables (Soutput) and propagated variables (Poutput). The spontaneous variables are those variables which are included in the set due to the statement itself, that is, they are generated by the statement. Thus, the variables in the Soutput set of a statement s are those variables which are being considered due to the inclusion of s in the data flow set, or because s is one of the statements that is specified as part of the criterion. The propagated variables are those variables that are simply propagated through the statement. Thus, the variables in the Poutput set are those that were also present in the Input set of the statement and were neither killed nor generated by the statement.

The statement n represents the statement currently being examined by the algorithm. This statement is obtained from the worklist of statements that are to be examined by the partial analysis algorithm. The statement n is examined for the definitions/uses of variables that meet the required dependencies. If no dependencies are found in a statement, the search must progress along the successors/predecessors of this statement. However, if the definition/use of a variable along a path is found that satisfies the dependency, or it is determined that none will be found along the current path, then the search for that variable along that path terminates. The search for other variables in the list, if any, continues. The continuation of the search along predecessor/successor statements is achieved by including these statements in the worklist. If the definitions/uses for all variables along n have been found (i.e., Output[n] = ∅), no new statements are added to the worklist.

For statement dependencies, the data flow sets contain statements of interest to the user. The statements are propagated as far as they can be propagated under the user-specified search characteristics. Unlike variable dependencies, definitions of variables cannot stop the propagation of statements, since we are simply interested in structural relationships among the statements and not in data dependence relationships. In the case of closure, if a statement is included in the data flow set, then it is also used to construct a new criterion (see Fig. 4).

Finally, the code for capturing relevant information must be embedded in a loop, shown in Fig. 5, which examines the statements in the worklist one at a time. The worklist is initialized based upon the input program points of the criterion. At any given point in the algorithm, the worklist indicates how far the search has progressed. All of the elements in this list are examined for the relevant information, and the worklist is appropriately updated. If all relevant information along all paths being searched has been found, no new statements are added to the worklist and it becomes empty. At this point the data flow set has been computed and the algorithm terminates. The statements in the worklist are ordered in depth-first/reverse-depth-first order, and this ordering is maintained as new statements are added to the worklist. This improves the efficiency of the algorithm by reducing the number of times a statement is examined. The code that represents the partial analysis algorithm is found in the synthesized attribute Code of the symbol DDSpecify.

We have only described the actions that construct the core of the partial analysis algorithm. Additional attributes that initialize the Input, Soutput, Poutput, and Output sets and the worklist at the start of the algorithm are omitted.
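The Input/Output bookkeeping described above reduces to a single propagation step per examined statement. This is our sketch (names assumed), with the meet operator passed in as set.union for any-path problems or set.intersection for all-path problems:

```python
def propagate(n, input_sets, output_sets, prev, meet):
    """One examination step for statement n: its Input set is the meet of
    the Output sets of its Prev statements. Returns True when Input[n]
    changed, i.e. when n's dependencies must be (re)computed and the
    search continued along Next(n)."""
    outs = [output_sets[p] for p in prev[n]]
    new_input = meet(*outs) if outs else set()
    if new_input != input_sets[n]:
        input_sets[n] = new_input
        return True
    return False
```

The change test is what makes the analysis demand driven: a statement whose Input set is unchanged contributes nothing new, so no successors of it are appended to the worklist.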

Search -> Direction Extent Path
    [ Search.Order := Direction.Order; Search.Prev := Direction.Prev;
      Search.Next := Direction.Next; Search.Ext := Extent.Ext;
      Search.Meet := Path.Meet; Search.Point := Direction.Point ]
Direction -> forward
    [ Direction.Order := "Depth-First"; Direction.Prev := "Pred";
      Direction.Next := "Succ"; Direction.Point := "after" ]
  | backward
    [ Direction.Order := "Reverse-Depth-First"; Direction.Prev := "Succ";
      Direction.Next := "Pred"; Direction.Point := "before" ]
Extent -> immediate [ Extent.Ext := "Immediate" ]
        | closure   [ Extent.Ext := "Closure" ]
Path -> all [ Path.Meet := "∩" ] | any [ Path.Meet := "∪" ]
Reference -> def [ Reference.Found := "Def";
                   Reference.New := if Reference.Ext = "Closure" then "Ref" else "" ]
           | use [ Reference.Found := "Ref";
                   Reference.New := if Reference.Ext = "Closure" then "Def" else "" ]

Figure 2: Search Attributes.

Variable -> Vdep Search Reference
    [ (point is in, before, or after)
      Reference.Ext := Search.Ext
      Variable.Code := {
        NewInput := Search.Meet { Output[s] : s ∈ Search.Prev(n) }
        If NewInput ≠ Input[n] Then
            Input[n] := NewInput
            FOUND := {(v point s) | (v point s) ∈ Input[n] and v ∈ Reference.Found(n)}
            KILL  := {(v point s) | (v point s) ∈ Input[n] and v ∈ Def(n)}
            If FOUND ≠ ∅ Then
                For each (v point s) ∈ FOUND Do DD := DD ∪ {(v point s, n)} Endfor
                Soutput[n] := Soutput[n] ∪ {(v Search.Point n) | v ∈ Reference.New(n)}
            Endif
            Poutput[n] := Input[n] − KILL
            Output[n] := Soutput[n] ∪ Poutput[n]
            If Output[n] ≠ ∅ Then
                For each s ∈ Search.Next(n) Do Worklist := s ⊕Search.Order Worklist Endfor
            Endif
        Endif } ]

Figure 3: Slices for Variable Dependences.

Statement -> Sdep Search
    [ Statement.Code := {
        NewInput := Search.Meet { Output[s] : s ∈ Search.Prev(n) }
        If NewInput ≠ Input[n] Then
            Input[n] := Poutput[n] := NewInput
            If Input[n] ≠ ∅ Then
                DD := DD ∪ {(s, n) | s ∈ Input[n]}
                If Search.Ext = "Closure" Then Soutput[n] := Soutput[n] ∪ {n} Endif
            Endif
            Output[n] := Soutput[n] ∪ Poutput[n]
            If Output[n] ≠ ∅ Then
                For each s ∈ Search.Next(n) Do Worklist := s ⊕Search.Order Worklist Endfor
            Endif
        Endif } ]

Figure 4: Slices for Statement Dependences.
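The statement-dependence code of Fig. 4 can be instantiated for a backward, closure, all-paths search, the combination used later for partial postdominators. The rendering below is our own sketch; the graph encoding and names are assumptions.

```python
from collections import deque

def statement_search_backward_all(succ, pred, s):
    """Our sketch of Fig. 4's statement-dependence code, instantiated for
    'Sdep backward closure all': statement s is propagated backward, with
    Output sets meeting by intersection, so (s, n) enters DD only when s
    lies on every path leaving n (a postdominator-style relationship).
    Under closure, each statement added to the data flow set becomes a
    criterion itself."""
    nodes = set(succ)
    inp  = {n: set() for n in nodes}    # Input[n]
    sout = {n: set() for n in nodes}    # spontaneous statements
    pout = {n: set() for n in nodes}    # propagated statements
    out  = {n: set() for n in nodes}    # Output[n]
    dd = set()
    sout[s], out[s] = {s}, {s}          # the criterion statement
    work = deque(pred[s])               # backward problem: Next = Pred
    while work:
        n = work.popleft()
        # all-paths meet: intersect Output over Prev = Succ
        new_inp = (set.intersection(*(out[m] for m in succ[n]))
                   if succ[n] else set())
        if new_inp != inp[n]:
            inp[n] = new_inp
            if inp[n]:
                dd |= {(p, n) for p in inp[n]}  # p reaches n along all paths
                sout[n] |= {n}                  # closure: n becomes a criterion
            pout[n] = new_inp
            out[n] = sout[n] | pout[n]
            if out[n]:
                work.extend(pred[n])
    return dd
```

On the diamond graph 1 -> {2, 3} -> 4 -> 5, searching from statement 4 visits only nodes 1, 2, and 3; node 5 is never examined, illustrating the partial nature of the analysis.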

DDSpecify -> Variable | Statement
    [ DDSpecify.Code := {
        Begin
            DD := ∅
            While Worklist ≠ ∅ Do
                get n from the head of Worklist
                Variable.Code | Statement.Code
            Endwhile
            return(DD)
        End } ]

Figure 5: Slice Computation Loop.

Algorithm ComputeDataSlice ( ProgramPoints )
Input: Program flow graph
Output: SLICE
Declare: Worklist: ordered list of statement nodes;
         Soutput[n], Poutput[n], Output[n], Input[n], NewInput: sets of variables
Begin
    SLICE := ∅
    For each statement n in the program flow graph Do
        Soutput[n] := Poutput[n] := Output[n] := Input[n] := ∅
    Endfor
    For each "v before s" ∈ ProgramPoints Do
        For each node n ∈ Pred(s) Do
            Soutput[n] := Soutput[n] ∪ {v}; Output[n] := Output[n] ∪ {v}
        Endfor
        Worklist := s ⊕depth-first Worklist
    Endfor
    For each "v after s" ∈ ProgramPoints Do
        Soutput[s] := Soutput[s] ∪ {v}; Output[s] := Output[s] ∪ {v}
        For each node n ∈ Succ(s) Do
            Worklist := n ⊕depth-first Worklist
        Endfor
    Endfor
    While Worklist ≠ ∅ Do
        get n from the head of Worklist
        NewInput := ∪ { Output[s] : s ∈ Pred(n) }
        If NewInput ≠ Input[n] Then
            Input[n] := NewInput
            If Input[n] ∩ Ref(n) ≠ ∅ Then
                SLICE := SLICE ∪ {n}
                Poutput[n] := Input[n] − Def(n)
                Output[n] := Soutput[n] ∪ Poutput[n]
            Else Output[n] := Soutput[n] ∪ Input[n] Endif
            If Output[n] ≠ ∅ Then
                For each s ∈ Succ(n) Do Worklist := s ⊕depth-first Worklist Endfor
            Endif
        Endif
    Endwhile
    return(SLICE)
End

Figure 6: Algorithm constructed from the specification Construct DataSlice: Vdep forward immediate any use.
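For reference, a directly runnable rendering of Fig. 6's ComputeDataSlice can be written as follows. This is our transcription into Python; the graph encoding, parameter names, and the example in the usage note are assumptions, not the paper's notation.

```python
from collections import deque

def compute_data_slice(succ, pred, ref, defs, program_points):
    """Our rendering of the ComputeDataSlice algorithm of Fig. 6, generated
    from 'Construct DataSlice: Vdep forward immediate any use'.
    program_points is a list of ("before" | "after", statement, variable)."""
    nodes = set(succ)
    sout = {n: set() for n in nodes}    # spontaneous variables
    pout = {n: set() for n in nodes}    # propagated variables
    out  = {n: set() for n in nodes}
    inp  = {n: set() for n in nodes}
    slice_, work = set(), deque()
    for point, s, v in program_points:
        if point == "before":
            for p in pred[s]:
                sout[p] |= {v}
                out[p] |= {v}
            work.append(s)
        else:  # "after"
            sout[s] |= {v}
            out[s] |= {v}
            work.extend(succ[s])
    while work:
        n = work.popleft()
        # any-path problem: the meet is union over predecessors
        new_inp = (set().union(*(out[p] for p in pred[n]))
                   if pred[n] else set())
        if new_inp != inp[n]:
            inp[n] = new_inp
            if inp[n] & ref[n]:             # a searched variable is used at n
                slice_.add(n)
                pout[n] = inp[n] - defs[n]  # n's definitions end the search
                out[n] = sout[n] | pout[n]
            else:
                out[n] = sout[n] | inp[n]
            if out[n]:
                work.extend(succ[n])        # continue along successors
    return slice_
```

On a straight-line graph 1 -> 2 -> 3 where x is defined at 1, both used and redefined at 2, and used again at 3, the search from "x after 1" stops at statement 2: its redefinition of x kills the search, so node 3 is never examined.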

The application of the partial analysis algorithm constructed for a sample program flow graph is illustrated in Fig. 7. The demand driven data flow is computed for variables X and Y after statement S1. The set Soutput[S1] is initialized to {(X after S1), (Y after S1)}. The worklist is initialized to the successor set of S1, that is, {S2, S3}. The search for uses of X and Y continues along the paths from S2 and S3. Since S2 uses Y and S3 uses X, they are both included in the data flow set. As the search continues further, the statement nodes for S4, S5, S1, and S6 are examined. This leads to the detection of uses of Y by S6 and S1. In Fig. 7, the worklist and the data flow set after the examination of each additional node are shown. Also, the Input and Output sets of all the nodes are shown when the algorithm terminates. These sets at most contain both (X after S1) and (Y after S1), because the information was computed using these criteria. This illustrates how the computation of a partial data flow set is the computation of only the data flow information requested by the user.

    Node | Demand driven data flow
    S2   | {(Y after S1, S2)}
    S3   | {(Y after S1, S2), (X after S1, S3)}
    S4   | {(Y after S1, S2), (X after S1, S3)}
    S5   | {(Y after S1, S2), (X after S1, S3)}
    S1   | {(Y after S1, S2), (X after S1, S3), (Y after S1, S1)}
    S6   | {(Y after S1, S2), (X after S1, S3), (Y after S1, S1), (Y after S1, S6)}

Figure 7: Computing forward variable dependencies: Compute DD :- ( X after S1, Y after S1 ).

4 Applications

Demand driven data flow information has been used to assist in the debugging and testing of programs. In the following sections, we give examples for debugging, testing, and test case generation.

4.1 Test Case Generation and Regression Testing

In order to reduce the number of test cases generated when retesting a program after changes using data flow testing, statically determinable properties of a program, namely postdominance and dominance, can be used to guide the test case generation process [6, 7]. Although these properties can be efficiently determined exhaustively, program changes would require repeated computations of the properties. Our partial analysis can be used to avoid these computations after program changes. In particular, postdominance is a property that is used to group def-use pairs in an attempt to find, if possible, a test case that satisfies the entire group of def-use pairs. Since we need only the postdominators of a group of def-use pairs, and not all of the def-use pairs in the program, we use a partial postdominator algorithm to produce postdominators for the selected group of def-use pairs. Thus, after the def-use pairs have been identified as requiring test cases, partial analysis is performed to determine all the statements that postdominate def-use pairs in each group. Postdominator information can be obtained using statement dependencies. A statement S1 is said to postdominate another statement S2 if all paths from S2 to the end of the program pass through S1. The final node in a flow graph postdominates all nodes in the flow graph. The following statement dependency specification computes all nodes that are postdominated by statement S, and the postdominator relationships among those nodes. The search that is carried out is backward, and the problem is an all-paths problem.

    Construct PDOMS: Sdep backward closure all
    Compute PDOMS :- ( S )

As pairs are satisfied, partial analysis continues to reduce the size of the program considered. During the generation of a test case, for a group of def-use pairs that can be potentially teste

