A Proposed Framework For Developing Quality Assessment Tools


Whiting et al. Systematic Reviews (2017) 6:204
DOI 10.1186/s13643-017-0604-6

METHODOLOGY - Open Access

A proposed framework for developing quality assessment tools

Penny Whiting1,2*, Robert Wolff3, Susan Mallett4,5, Iveta Simera6 and Jelena Savović1,2

Abstract

Background: Assessment of the quality of included studies is an essential component of any systematic review. A formal quality assessment is facilitated by using a structured tool. There are currently no guidelines available for researchers wanting to develop a new quality assessment tool.

Methods: This paper provides a framework for developing quality assessment tools based on our experiences of developing a variety of quality assessment tools for studies of differing designs over the last 14 years. We have also drawn on experience from the work of the EQUATOR Network in producing guidance for developing reporting guidelines.

Results: We do not recommend a single 'best' approach. Instead, we provide a general framework with suggestions as to how the different stages can be approached. Our proposed framework is based around three key stages: initial steps, tool development and dissemination.

Conclusions: We recommend that anyone who would like to develop a new quality assessment tool follow the stages outlined in this paper. We hope that our proposed framework will increase the number of tools developed using robust methods.

Keywords: Risk of bias, Systematic reviews, Quality

Background

Systematic reviews are generally considered to provide the most reliable form of evidence for decision makers [1]. A formal assessment of the quality of the included studies is an essential component of any systematic review [2, 3]. Quality can be considered to have three components: internal validity (risk of bias), external validity (applicability/variability) and reporting quality. The quality of included studies depends on them being sufficiently well designed and conducted to be able to provide reliable results [4].
* Correspondence: penny.whiting@bristol.ac.uk
1 NIHR CLAHRC West, University Hospitals Bristol NHS Foundation Trust, Bristol, UK
2 School of Social and Community Medicine, University of Bristol, Bristol, UK
Full list of author information is available at the end of the article

Poor design, conduct or analysis can introduce bias or systematic error affecting study results and conclusions; this is also known as internal validity. External validity, or the applicability of the study to the review question, is also an important component of study quality. Reporting quality relates to how well the study is reported: it is difficult to assess other components of study quality if the study is not reported with the appropriate level of detail.

When conducting a systematic review, stronger conclusions can be derived from studies at low risk of bias than when evidence is based on studies with serious methodological flaws. Formal quality assessment as part of a systematic review, therefore, provides an indication of the strength of the evidence on which conclusions are based and allows comparisons between studies based on risk of bias [3]. The GRADE system for rating the overall quality of the evidence included in a systematic review is recommended by many guidelines and systematic review organisations such as the National Institute for Health and Care Excellence (NICE) and Cochrane. Risk of bias is a key component of this, along with publication bias, imprecision, inconsistency, indirectness and magnitude of effect [5, 6].

A formal quality assessment is facilitated by using a structured tool. Although it is possible for reviewers to simply assess what they consider to be key components of quality, this may result in important sources of bias

being omitted, inappropriate items included or too much emphasis being given to particular items, guided by reviewers' subjective opinions. In contrast, a structured tool provides a convenient standardised way to assess quality, providing consistency across reviews. Robust tools are usually developed based on empirical evidence refined by expert consensus.

This paper provides a framework for developing quality assessment tools. We use the term 'quality assessment tool' to refer to any tool designed to target one or more aspects of the quality of a research study. This term can apply to any tool, whether focused specifically on one aspect of study quality (usually risk of bias) or to broader tools covering additional aspects such as applicability/generalisability and reporting quality. We do not place any restrictions on the type of 'tool' to which this framework can be applied: it should be appropriate for a variety of different approaches such as checklists, domain-based approaches, tables or graphics or any other format that developers may want to consider.

We do not recommend a single 'best' approach. Instead, we provide a general framework with suggestions on how the different stages can be approached. This is based on our experience of developing quality assessment tools for studies of differing designs over the last 14 years. These include QUADAS [7] and QUADAS-2 [8] for diagnostic accuracy studies, ROBIS [9] for systematic reviews, PROBAST [10] for prediction modelling studies, ROBINS-I [11] for non-randomised studies of interventions and the new version of the Cochrane risk of bias tool for randomised trials (RoB 2.0) [12].
We have also drawn on experience from the work of the EQUATOR Network in producing guidance for developing reporting guidelines [13].

Methods

Over the years that we have been involved in the development of quality assessment tools and through involvement in different development processes, we noticed that the methods used to develop each tool could be mapped to a similar underlying process. The proposed framework evolved through discussion among the team, describing the steps involved in developing the different tools, and then grouping these into appropriate headings and stages.

Results: Proposed framework

Fig. 1 and Table 1 outline the proposed steps in our framework, grouped into three stages. The table also includes examples of how each step was approached for the tools that we have been involved in developing. Each step is discussed in detail below.

Stage 1: initial steps

Identify the need for a new tool

The first step in developing a new quality assessment (QA) tool is to identify the need for a new tool: What is the rationale for developing the new tool? In their guidance on developing reporting guidelines, Moher et al. [13] stated that "developing a reporting guideline is complex and time consuming, so a compelling rationale is needed". The same applies to the development of QA tools. It may be that there is no existing QA tool for the specific study design of interest; a QA tool is available but not directly targeted to the specific context required (e.g. tools designed for clinical interventions may not be appropriate for public health interventions); existing tools might not be up to date; new evidence on particular sources of bias may have emerged that is not adequately addressed by existing tools; or new approaches to quality assessment mean that a new approach is needed. For example, QUADAS-2 and RoB 2.0 were developed as experience, anecdotal reports and feedback suggested areas for improvement of the original QUADAS and Cochrane risk of bias tools [7].
ROBIS was developed as we felt there was no tool that specifically addressed risk of bias in systematic reviews [9].

It is important to consider whether a completely new tool is needed or whether it may be possible to modify or adapt an existing tool. If modifying an existing tool, then the original can act as a starting point, although in practice, the new tool may look very different from the original. Both QUADAS-2 [8] and the new Cochrane risk of bias tool used the original versions of these tools as a starting point [12].

Obtain funding for the tool development

There are costs involved in developing a new QA tool. These will vary depending on the approach taken, but items that may need to be funded include researcher time, literature searching, travel and subsistence for attending meetings, face-to-face meetings, piloting the tool, online survey software, open access publication costs, website fees and conference attendance for dissemination. We have used different approaches to fund the development of quality assessment tools. QUADAS-2 [8] was funded by the UK Medical Research Council Methodology Programme as part of a larger project grant. ROBIS [9], ROBINS-I [11] and Cochrane RoB 2.0 [12] were funded through smaller project-specific grants, and PROBAST [10] received no specific funding. Instead, the host institutions for each steering group member allowed them time to work on the project and covered travel and subsistence for regular steering group meetings and conference attendance. Freely available SurveyMonkey software (www.surveymonkey.co.uk) was used to run an online Delphi process.
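The online Delphi process mentioned above amounts to repeated survey rounds in which candidate items are retained, dropped or carried forward until the panel reaches agreement. As a rough sketch of the bookkeeping for a single round (the response options, the 75% threshold, the function name and the example questions are all invented here, not taken from any of the tools discussed), one might write:

```python
from collections import Counter

# Hypothetical responses from one Delphi round: for each candidate item,
# each panellist votes on whether it should be included in the tool.
responses = {
    "Was the reference standard independent of the index test?":
        ["include", "include", "include", "exclude", "include"],
    "Was the sample size justified?":
        ["include", "exclude", "exclude", "unsure", "include"],
}

CONSENSUS = 0.75  # assumed threshold: 75% agreement settles an item

def round_summary(responses, threshold=CONSENSUS):
    """Classify each item as settled by consensus or carried to the next round."""
    summary = {}
    for item, votes in responses.items():
        top_choice, top_votes = Counter(votes).most_common(1)[0]
        if top_votes / len(votes) >= threshold:
            summary[item] = f"consensus: {top_choice}"
        else:
            summary[item] = "no consensus - carry to next round"
    return summary

for item, verdict in round_summary(responses).items():
    print(f"{verdict}: {item}")
```

In a real Delphi exercise, items without consensus would be fed back to panellists with summarised group results before the next survey round.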

Fig. 1 Overview of proposed framework

Assemble team

Assembling a team with the appropriate expertise is a key step in developing a quality assessment tool. As tool development usually relies on expert consensus, it is essential that the team includes people with an appropriate range of expertise. This generally includes methodologists with expertise in the study designs targeted by the tool, people with expertise in QA tool development and also end users, i.e. reviewers who will be using the tool. Reviewers are a group that may sometimes be overlooked but are essential to ensure that the final tool is usable by those for whom it is developed. If the tool is likely to be used in different content areas, then it is important to include reviewers who will be using the tool in all contexts. For example, ROBIS is targeted at different types of systematic reviews including reviews of interventions, diagnostic accuracy, aetiology and prognosis. We included team members who were familiar with all different types of review to ensure that the team included the appropriate expertise to develop the tool. It can also be helpful to include reviewers with a range of expertise, from those new to quality assessment to more experienced reviewers. Including representatives from a wide range of organisations can also be helpful for the future uptake and dissemination of the tool. Thinking about this at an early stage is helpful: the more organisations that are involved in the development of the tool, the more likely these organisations are to feel some ownership of the tool and to want to implement the tool within their organisation in the future. The total number of people involved in tool development varies. For our tools, the number of people involved directly in the development of each tool ranged from 27 to 51 with a median of 40.

Manage the project

The size and the structure of the project team also need to be carefully considered.
In order to cover an appropriate range of expertise, it is generally necessary to include a relatively large group of people. It may not be practical for such a large group to be involved in the day-to-day development of the tool, and so it may be desirable to have a smaller group responsible for driving the project by leading and coordinating all activities, and involving the larger group where their input is required. For example, when developing QUADAS-2 and PROBAST, a steering group of around 6–8 people led the development of the tool, bringing in a larger consensus group to help inform decisions on the scope and content of the tool. For ROBINS-I and Cochrane RoB 2.0, a smaller steering group led the development, with domain-based working groups developing specific areas of the tool.

Define the scope

The scope of the quality assessment tool needs to be defined at an early stage. Table 2 outlines key questions to consider when defining the scope. Tools generally target one specific type of study. The specific study design to be considered is one of the first components to define. For example, QUADAS-2 [8] focused on diagnostic accuracy studies, PROBAST [10] on prediction modelling studies and the Cochrane risk of bias tool on randomised trials. Some tools may be broader, targeted at multiple related designs. For example,

Table 1 Proposed steps to develop QA tools with examples drawn from existing tool development

QUADAS-2 [8]
Stage 1: initial steps
  1.1 Identify need: update for QUADAS
  1.2 Obtain funding: part of MRC methodology programme grant
  1.3 Assemble team: steering group (n = 9), larger group (n = 18)
  1.5 Define scope: diagnostic test accuracy; bias and applicability; domain based with signalling questions; signalling questions phrased so yes indicates low risk of bias
Stage 2: tool development
  2.1 Generate items: starting point original QUADAS tool [7]; evidence review of sources of bias and variation
  2.2 Agree items: face-to-face meeting
  2.3 Produce first draft: steering group
  2.4 Pilot and refine: web-based survey; piloting in reviews
Stage 3: dissemination
  3.1 Publication: single publication, guidance document as web supplement
  3.2 Website: www.quadas.org
  3.3 Uptake: recommended by multiple organisations including Cochrane, NICE and AHRQ
  3.4 Translations: Italian and Japanese versions

ROBIS [9]
Stage 1: initial steps
  1.1 Identify need: new tool specifically for RoB in SR
  1.2 Obtain funding: MRC methodology project grant
  1.3 Assemble team: steering group (n = 10), larger group (n = 19)
  1.4 Manage project: project group (n = 5)
  1.5 Define scope: systematic reviews; bias and relevance; domain based with signalling questions; signalling questions phrased so yes indicates low risk of bias
Stage 2: tool development
  2.1 Generate items: review of existing QA tools; how QA handled in existing overviews; classification of MECIR guidance to identify sources of bias
  2.2 Agree items: face-to-face meeting
  2.3 Produce first draft: steering group
  2.4 Pilot and refine: web-based survey; piloting in reviews and by steering group members on key reviews
Stage 3: dissemination
  3.1 Publication: single publication, guidance document as web supplement
  3.2 Website: www.robis-tool.info
  3.3 Uptake: recommended by Cochrane, NICE and Estonia Health Insurance Fund
  3.4 Translations: Estonian and Portuguese versions in preparation

PROBAST [10]
Stage 1: initial steps
  1.1 Identify need: new tool specifically for prediction models
  1.2 Obtain funding: no external funding
  1.3 Assemble team: steering group (n = 9), larger group (n = 40)
  1.5 Define scope: prediction modelling (prognostic and diagnostic); bias and applicability; domain based with signalling questions; signalling questions phrased so yes indicates low risk of bias
Stage 2: tool development
  2.1 Generate items: key publications on prognostic modelling studies; work on TRIPOD [21], CHARMS [22] and QUADAS-2 [8]
  2.2 Agree items: web-based survey
  2.3 Produce first draft: steering group
  2.4 Pilot and refine: web-based survey; piloting in reviews
Stage 3: dissemination
  3.1 Publication: primary publication, E&E document (in process)
  3.2 Website: not yet available
  3.3 Uptake: recommended by Cochrane Prognosis Group
  3.4 Translations: none to date

ROBINS-I [11]
Stage 1: initial steps
  1.1 Identify need: new tool specifically for NRS of interventions
  1.2 Obtain funding: MIF grant from Cochrane and MRC methodology project grant
  1.3 Assemble team: steering group (n = 5), domain-based working groups and others (n = 35)
  1.5 Define scope: non-randomised studies of interventions; bias; domain based with signalling questions
Stage 2: tool development
  2.1 Generate items: survey of Cochrane review groups and face-to-face meeting; domain-based working groups
  2.2 Agree items: face-to-face meeting; domain-based working groups
  2.3 Produce first draft: domain-based working groups and steering group
  2.4 Pilot and refine: domain-based working groups, web survey, cognitive interviews, piloting by working groups and external reviewers on key papers, event to pilot version 1.0
Stage 3: dissemination
  3.1 Publication: single publication, guidance document on website
  3.2 Website: www.riskofbias.info
  3.3 Uptake: not yet, only recently available
  3.4 Translations: none to date

RoB 2.0 [12]
Stage 1: initial steps
  1.1 Identify need: update to Cochrane RoB tool
  1.2 Obtain funding: MRC hubs project grant
  1.3 Assemble team: steering group (n = 8), domain-based working groups and others (n = 38)
  1.5 Define scope: randomised controlled trials; bias; domain based with signalling questions and algorithm for domain-level ratings
Stage 2: tool development
  2.1 Generate items: starting point draft tool based on original Cochrane RoB tool [23]; evidence review of sources of bias
  2.2 Agree items: face-to-face meeting; domain-based working groups
  2.3 Produce first draft: domain-based working groups and steering group
  2.4 Pilot and refine: domain-based working groups; 3-day piloting event; additional piloting of refined version
Stage 3: dissemination
  3.1 Publication: not yet published, guidance document on website
  3.2 Website: www.riskofbias.info
  3.3 Uptake: not yet, only recently available
  3.4 Translations: none to date

Table 2 Questions to consider when defining the scope
- What study designs will be targeted by the new tool?
- Will the tool consider only risk of bias (internal validity) or will it also be concerned with assessing applicability (external validity) and possibly reporting quality?
- What is the definition of quality for the tool? How is risk of bias defined? How are other components of quality defined (if included), e.g. applicability?
- What type of tool structure will be adopted, e.g. simple checklist design or a domain-based approach?
- How will quality items be rated within the tool?

ROBINS-I targets all non-randomised studies of interventions rather than one single study design such as cohort studies. When deciding on the focus of the tool, it is important to clearly define the design and topic areas targeted. Trade-offs of different approaches need consideration. A more focused tool can be tailored to a specific topic area. A broader tool may not be as specific but can be used to assess a wider variety of studies. For example, we developed ROBIS to be used to assess any type of systematic review, e.g. intervention, prognostic, diagnostic or aetiology. Previous tools, such as the AMSTAR tool, were developed to assess reviews of RCTs [14]. Key to any quality assessment tool is a definition of quality as addressed by the tool, i.e. defining what exactly the tool is trying to address. We have found that once the definition of quality has been clearly agreed, then it becomes much easier to decide on which items to include in the tool.

Other features to consider include whether to address both internal (risk of bias) and external validity (applicability) and the structure of the tool. The original QUADAS tool used a simple checklist design and combined items on risk of bias, reporting quality and applicability. Our more recently developed tools have followed a domain-based approach with a clear focus on assessment of risk of bias.
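To make the domain-based structure concrete, the sketch below shows how signalling questions might feed a categorical domain-level judgement. Everything in it (the domain name, the questions, the response options and the decision rule) is invented for illustration; it is not the logic of any published tool. It deliberately produces a categorical judgement rather than a numeric quality score:

```python
from dataclasses import dataclass, field

# Assumed response vocabulary for signalling questions
ANSWERS = {"yes", "probably yes", "probably no", "no", "no information"}

@dataclass
class Domain:
    """One risk-of-bias domain: signalling questions plus an overall judgement."""
    name: str
    # question -> answer; questions phrased so that "yes" flags low risk
    answers: dict = field(default_factory=dict)

    def judgement(self):
        """Illustrative decision rule: any clear 'no' -> high risk; any
        'probably no' or 'no information' -> unclear; otherwise low.
        A categorical judgement, never a summed score."""
        values = self.answers.values()
        if "no" in values:
            return "high"
        if "no information" in values or "probably no" in values:
            return "unclear"
        return "low"

# Invented example domain in the style of a diagnostic accuracy assessment
patient_selection = Domain(
    name="Patient selection",
    answers={
        "Was a consecutive or random sample enrolled?": "yes",
        "Were inappropriate exclusions avoided?": "no information",
    },
)
print(patient_selection.judgement())  # prints "unclear" under the rule above
```

Keeping the judgement categorical at the domain level is one way to discourage users from simply summing items into an overall score.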
Many of these domain-based tools also include sections covering applicability/relevance. How to rate individual items included in the tool also forms part of the scope. The original QUADAS tool [7] used a simple 'yes, no or unclear' rating for each question. The domain-based tools such as QUADAS-2 [8], ROBIS [9] and PROBAST [10] have signalling questions which flag the potential for bias. These are generally factual questions and can be answered as 'yes, no or no information'. Some tools include a 'probably yes' or 'probably no' response to help reviewers answer these questions when there is not sufficient information for a more definite response. The overall domain ratings then use decision ratings like 'high, low or unclear' risk of bias. Some tools, such as ROBINS-I [11] and RoB 2.0 [12], include additional domain-level ratings such as 'critical, severe, moderate or low' and 'low, some concerns, high'. We strongly recommend that at this stage, tool developers are explicit that quality scores should not be incorporated into the tools. Numerical summary quality scores have been shown to be poor indicators of study quality, and so alternatives to their use should be encouraged [15, 16]. When developing many of our tools, we were explicit at the scope stage that we wanted to come up with an overall assessment of study quality but avoid the use of quality scores. One of the reasons for introducing the domain-level structure first used with the QUADAS-2 tool was explicitly to avoid users calculating quality scores by simply summing the number of items fulfilled.

Agreeing the scope of the tool may not be straightforward and can require much discussion between team members. An additional consideration is how decisions on scope will be made. Will this be by a single person or by the steering group, and should some or all decisions be agreed by the larger group? The approach that we have often taken is for a smaller group (e.g.
steering group) to propose the scope of the tool, with agreement reached following consultation with the larger group. Questions on the scope can often form the first discussion points at a face-to-face meeting (e.g. ROBIS [9] and QUADAS-2 [8]) or the first questions on a web-based survey (e.g. PROBAST [10]).

As with any research project, a protocol that clearly defines the scope and proposed plans for the development of the tool should be produced at an early stage of the tool development process.

Stage 2: tool development

Generate initial list of items for inclusion

The starting point for a tool is an initial list of items to consider for inclusion. There are various ways in which this list can be generated. These include looking at existing tools, evidence reviews and expert knowledge. The most comprehensive way is to review the literature for potential sources of bias and to provide a systematic review summarising the evidence for the effects of these. This is the approach we took for the original QUADAS tool [7] and also the updated QUADAS-2 [8, 17, 18]. Reviewing the items included in existing tools and summarising the number of tools that included each potential item can be a useful initial step, as it shows which potential items of bias have been considered important by previous tool developers. This process was followed for the original QUADAS tool [7] and for ROBIS [9]. Examining how previous systematic reviews have incorporated quality into their results can also be helpful to provide an indication of the requirements of a QA tool. If you are updating a previous QA tool, then this will often form the starting point for potential items to include in the updated tool. This was the case for QUADAS-2 [8] and RoB 2.0 [12]. For ROBINS-I

[11], domains were agreed at a consensus meeting, and then expert working groups identified potential items to include in each domain. Generating the list of items for inclusion was, therefore, based on expert consensus rather than reviewing existing evidence. This can also be a valid approach. The development of PROBAST used a combined approach of using an existing tool for a related area as the starting point (QUADAS-2), non-systematic literature reviews and expert input from both steering group members and the wider PROBAST group [10].

Agree initial items and scope

After the initial stages of tool development, which can often be performed by a smaller group, input from the larger group should be sought. Methods for gaining input from the larger group include holding a face-to-face meeting or a web-based survey. At this stage, the scope defined in step 1.5 can be brought to the larger group for further discussion and refinement. The initial list of items needs to be further refined until agreement is reached on which items should be included in an initial draft of the tool. If a face-to-face meeting is held, smaller break-out groups focussing on specific domains can be a helpful structure to the meeting. QUADAS-2, ROBIS and ROBINS-I all involved face-to-face meetings with smaller break-out groups early in the development process [8, 9, 11]. If moving straight to a web-based survey, then respondents can be asked about the scope with initial questions considering possible items to include. This approach was taken for PROBAST [10] and the original QUADAS tool [7]. For PROBAST, we also asked group members to provide supporting evidence for why items should be included in the tool [10].
Items shouldbe turned into potential questions/signalling questionsfor inclusion in the tool at this relatively early stage inthe development of the tool.Produce first draft of tool and develop guidanceFollowing the face-to-face meeting or initial surveyrounds, a first draft of the tool can be produced. The initial draft may be produced by a smaller group (e.g. steering group), single person, or by taking a domain-basedapproach with the larger group split into groups witheach taking responsibility for single domains. ForQUADAS-2 [8] and PROBAST [10], a single person developed the first draft which was then agreed by thesteering group before moving forwards. The first draft ofROBIS was developed following the face-to-face meetingby two team members. Initial drafts of ROBINS-I [11]and the RoB 2.0 [12] were produced by teams workingon single domains proposing initial versions for their domains. Drafts for each domain were then put togetherby the steering group to give a first draft of the tool.Page 6 of 9Once a first draft of the tool is available, it may be helpful to start producing a clear guidance document describing how to assess each of the items included in thetool. The earlier such a guide can be produced, the moreopportunity there will be to pilot and refine it alongsidethe tool.Pilot and refineThe first draft of the tool needs to go through a processof refinement until a final version that has agreement ofthe wider group is achieved. Consensus may be achievedin various ways. Online surveys consisting of multiplerounds until agreement on the final tool is reached are agood way of involving large numbers of experts in thisprocess. This is the approach used for QUADAS, [7],QUADAS-2 [8], ROBIS, [9] and PROBAST [10]. Ifdomain-based working groups were adopted for theinitial development of the tool, these can also be used tofinalise the tool. 
Members of the full group can then provide feedback on draft versions, including domains that they were not initially assigned to. This approach was used for ROBINS-I and RoB 2.0. It would also be feasible to combine such an approach with a web-based survey.

Whilst the tool is being refined, initial piloting work can be undertaken. If a guidance document has been produced, then it can be included in the piloting process. If the tool is available in different formats, for example paper-based or an Access database, then these could also be made available and tested as part of the piloting. The research team may ask reviewers working on appropriate review topics to pilot the tool in their review. Alternatively, reviewers can be asked to pilot the tool on a series of sample papers and to provide feedback on their experience of using the tool. An efficient way of completing such a process is to hold a piloting event where reviewers try out the tool on a sample of papers which they can either bring with them or that are provided to them. This can be a good approach to get feedback in a timely and interactive manner. However, there are costs associated with running such an event. Asking reviewers to pilot the tool in ongoing reviews can result in delays, as piloting cannot be started until the review is at the data extraction stage. Identifying reviews at an appropriate stage with reviewers willing to spend the extra time needed to pilot a new tool is not always straightforward. We held a piloting event when developing RoB 2.0 and found this to be very efficient in providing immediate feedback on the tool. We were also able to hold a group discussion for reviewers to provide suggestions for improvements to the tool and to highlight any items that they found difficult. For previous tools, we used remote piloting, which provided helpful feedback but was not as efficient as the piloting event. Ideally, any piloting

process should involve reviewers with a broad range of experience, ranging from those with extensive experience of conducting quality assessment of studies of a variety of designs to those relatively new to the process.

The time taken for piloting and refining the tool can vary considerably. For some tools, such as ROBIS and QUADAS-2, this process was completed in around 6–9 months. For PROBAST and ROBINS-I, the process took over 4 years.

Stage 3: dissemination

Develop a publication strategy

A strategy to disseminate the tool is required. This should be discussed at the start of the project but may evolve as the tool is developed. The primary means of dissemination is usually through publication in a peer-reviewed journal. A more detailed guidance document can accompany the publication and be made available as a web appendix. Another option is to have dual publications: one reporting the tool and outlining how it was developed, and a second providing additional guidance on how to use the tool. This is sometimes known as an 'E&E' (explanation and elaboration) publication and is an approach adopted by many reporting guidelines [13].

Establish a website

Developing a website for the tool can help with dissemination. Ideally, the website should be developed before publication of the tool so that details can be included in the publication. The final version of the tool can be posted on the website together with the full guidance document. Details on who contributed to the tool development and any funding should also be acknowledged on the website. Additional resources to help reviewers use the tool can also be posted there. For example, the ROBIS (www.robis-tool.info) and QUADAS (www.quadas.org) websites both contain a Microsoft Access database that reviewers can use to complete their assessment and templates to produce graphical and tabular displays. They also contain links to other relevant resources and details of training opportunities.
Other resources that may be useful to include on tool websites include worked examples and translations of the tools, where available. QUADAS-2 has been translated into Italian and Japanese, and the translations of these tools can be accessed via its website. If the tool has been endorsed or recommended for use by particular organisations (e.g. Cochrane, UK National Institute for Health and Care Excellence

