Evidence-Based Intervention Review Criteria & Review Rubrics


Applied criteria to identify evidence-based interventions

TABLE OF CONTENTS

1. Introduction
   Overview
   Establishing Criteria to Identify Evidence-Based Interventions
   Reviewer Guide and Quality Review Rubric

2. Appendices
   A: Reviewer Guide to the Criteria for Evidence
   B: Quality Review Rubric
   C: Additional Rubrics

1. Introduction

Overview

This document contains information regarding the review criteria and related rubrics used to identify evidence-based PK-6 literacy interventions. The rubrics were applied to specific interventions that included one or more of the following domains of literacy-specific skills: print concepts, phonological awareness, alphabet knowledge, phonics/decoding, irregular/sight words, fluency, oral language, vocabulary, comprehension, spelling, and writing. For more information about the reviews, please see Reviewed Intervention Brief 2016.

Establishing Criteria to Identify Evidence-Based Interventions

Criteria were established to determine whether a particular intervention showed evidence of effect for a given area of literacy, for a given grade, and for a given tier of support (classwide [1], targeted [2], and/or intensive [3]). There were three major action steps in developing the criteria:

1. Establish and convene task group. The purpose of the task group was to establish the criteria, the review rubric, and the process to review universal, targeted, and intensive programs and interventions. This 12-member group was established in 2014, convened weekly, and included one national expert, who collaborated with two additional national experts:
   - Dr. Jeanne Wanzek [4] – Florida State University (lead); expertise in K-6 literacy; served as the primary developer of the criteria and rubric.
   - Dr. Laura Justice – Ohio State University; expertise in PK literacy; worked directly with Dr. Wanzek to ensure PK representation.
   - Dr. Andy Porter – University of Pennsylvania; expertise in alignment of standards; worked to develop the alignment criteria and rubric.
   The purpose of the national expert was to work directly with the task group to develop the criteria and review rubric based on current research. Dr. Wanzek served as the primary developer of the criteria/rubric with guidance from the task group.

2. Establish and convene vetting group. The purpose of this group was to provide feedback, guidance, and input on products and processes developed by the task group. This 31-member group was established in 2014 and met a total of three times to provide critical feedback and guidance on the work.

3. Develop criteria and rubric. The task group, including the national expert, were primarily responsible for developing the criteria and rubric.

[1] This included evidence-based interventions or programs shown to be effective with entire classrooms of students.
[2] This included evidence-based interventions or programs shown to be effective at the Tier 2/targeted level of an MTSS framework.
[3] This included evidence-based interventions or programs shown to be effective at the Tier 3/intensive level of an MTSS framework.
[4] Dr. Jeanne Wanzek has since moved to Vanderbilt University.

Dr. Wanzek reviewed extant research and developed criteria/rubric drafts. The work was reviewed weekly, with guidance provided by the task group. Feedback and input provided by the vetting group were incorporated directly into the products.

In order to determine whether a particular intervention showed evidence of effect for a given area of literacy, for a given grade, and for a given tier of support, Dr. Wanzek developed criteria to determine the extent of evidence for interventions. The extent of evidence was based on the quality, replicability, generalizability, and positive results of published research and/or technical reports for any given intervention.

The national expert, working with the task group, adapted criteria from previous work at the national level, including the What Works Clearinghouse (WWC) and the National Center on Intensive Intervention (NCII). For details about this work, see Dr. Jeanne Wanzek's white paper at: iowareadingresearchcenter.org

The adapted criteria are based on internal and external validity as primary considerations of quality, as well as the extent of evidence within and across studies. Brief descriptions of each follow:

- Internal Validity. Internal validity addresses how well a research study was designed to reduce the impact of factors not being studied. This increases the likelihood that positive results are due to the intervention and not to things outside of the study. Internal validity criteria were: Research Design, Evidence of Confounding Factors, Group/Person Conducting the Study, Developer of Assessment, Data Collection and Adequacy of Measures, and Data Analysis Methods.

- External Validity. External validity addresses the extent to which the study and its findings apply to other practical settings beyond the controlled research study. External validity criteria were: Implementation, Reading Domains Addressed, Student Outcomes Measured, and Treatment Acceptability.

- Overall Findings. This area addresses the extent of positive overall findings of a given study [RFP Review]. Specifically, the study findings must demonstrate statistical significance or an effect size of .25 or higher on one or more reading domains, with no statistically significant negative effects. In the case of single case designs, the reading data must demonstrate at least three instances of an effect.
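Where a study reports group means and standard deviations rather than an effect size, reviewers can compute a standardized mean difference and compare it against the .25 threshold named under Overall Findings. The sketch below is a minimal illustration using Cohen's d with a pooled standard deviation; the document does not prescribe a specific effect size formula, and the function names and example numbers are hypothetical.

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) using a pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def meets_overall_findings(effect_sizes, significant_positive, significant_negative):
    """Overall Findings check: statistical significance or an effect size of .25 or
    higher on at least one reading domain, and no statistically significant
    negative effects."""
    positive_evidence = significant_positive or any(es >= 0.25 for es in effect_sizes)
    return positive_evidence and not significant_negative

# Example: treatment M=52.0 (SD=10.1, n=48) vs. comparison M=48.5 (SD=9.8, n=50)
d = cohens_d(52.0, 10.1, 48, 48.5, 9.8, 50)   # roughly 0.35
print(round(d, 2), meets_overall_findings([d], significant_positive=False, significant_negative=False))
```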

A Quality Review Rubric was developed based on the criteria. The criteria descriptions are in Appendix A; the rubric is in Appendix B. The rubric was used to review interventions for both the RFP and Iowa Reviews.

Reviewer Guide and Quality Review Rubric

Once the quality criteria were established, Dr. Wanzek, working with the task group, developed:

1. A Reviewer Guide to the Criteria and Rubric – Appendix A. This guide provides detailed information about each criterion within the rubric:
   a. General Information. This includes the following: Intended Grade Levels, Reading Domains Covered, Recommended Dosage of Program, Number of Lessons Available, Placement Assessment Included, Intended Population of Students, Recommended Implementers, Recommended Grouping Formats, Parent/Home Connection Strategies/Materials Included, Number of Studies Submitted, Number of Peer-Reviewed Studies Submitted, Costs of Materials, Training, Hours of Training, and any Additional Costs.
   b. Evidence for Program Effectiveness. This includes the following:
      - Internal Validity: Study Design; Group/Person Conducting the Study; Developer of Assessment; Technical Adequacy of Measures to Determine Effect Size or Evidence of Improvement; Data Collection; Data Analysis; Evidence of Confounding Factors
      - External Validity: Group/Person Implementing Intervention/Program; Dosage: Session Time and Frequency; Fidelity of Implementation; Reading Domains Addressed; Grouping Format; Student Outcomes Measured; Treatment Acceptability
      - Findings: Overall Findings and Long-Term Findings
      - Summary of Evidence: Participants and Extent of Evidence

2. A Quality Review Rubric – Appendix B. The rubric presents all the criteria in a usable review rating form. Generally, there are three levels of ratings: Desired, Acceptable, and Undesired. Scores are established across each area; recommended scores are included within the rubric in Appendix B. The rubric is designed to be used in conjunction with the Reviewer Guide.

Although Appendices A & B provide neither direct technical assistance nor a step-by-step guide for reviewing evidence, both the rubric and guide are available for teams to use to establish the evidence base of any given intervention.

To use the Reviewer Guide and Quality Review Rubric, reviewer teams should be established and should follow procedures similar to those outlined below:

1. Determine Interventions to Review. Within the team, determine how to select interventions to review. This may be done via an online survey to understand current intervention practices, by reviewing current literature, or in consultation with local Area Education Agencies or Institutes of Higher Education.

2. Determine Study Criteria. Teams should complete extensive literature reviews for each intervention, and therefore should put in place study-inclusion criteria. See 3. The Reviews – Iowa Review 2015 for sample study criteria.

3. Implement Tight Review Procedures. Teams should be trained on the Reviewer Guide and Quality Review Rubric before using it to make decisions. Training should include an inter-rater reliability component to ensure reviewers can effectively and consistently apply the criteria. At least two reviewers should be assigned the same information to review and score independently. All reviewers should be free of any conflicts of interest. Subsequent to final scoring, discrepancies must be reconciled and final scores submitted for final analysis.

4. Apply Thresholds. After all reviews are completed across all studies for each intervention, the threshold of Acceptable in the area of Extent of Evidence should be used to determine whether any given intervention is evidence-based for a particular area of literacy in a specific grade or age span. This should include the following criteria: one study with high internal and external validity and positive findings, with no studies showing negative findings. In the case of Intensive Interventions where a single case design was used, five or more studies with high internal and external validity. (A decision-rule sketch follows this list.)
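As a rough illustration of how the Apply Thresholds step could be operationalized, the sketch below encodes the decision rule described in item 4. It assumes each study has already been rated with the Quality Review Rubric; the class and field names are hypothetical and not part of the rubric.

```python
from dataclasses import dataclass

@dataclass
class StudyReview:
    high_internal_validity: bool
    high_external_validity: bool
    positive_findings: bool
    negative_findings: bool
    single_case_design: bool

def meets_extent_of_evidence(reviews):
    """Threshold from step 4: no studies with negative findings, plus either one
    high-quality group-design study with positive findings, or five or more
    high-quality single case design studies (Intensive Interventions)."""
    if any(r.negative_findings for r in reviews):
        return False
    strong = [r for r in reviews
              if r.high_internal_validity and r.high_external_validity and r.positive_findings]
    group_design = [r for r in strong if not r.single_case_design]
    single_case = [r for r in strong if r.single_case_design]
    return len(group_design) >= 1 or len(single_case) >= 5

# Example: one strong group-design study with positive findings, no negative studies
reviews = [StudyReview(True, True, True, False, False)]
print(meets_extent_of_evidence(reviews))  # True
```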

2. Appendices

The appendices include the following:
A. Reviewer Guide to the Criteria for Evidence
B. Quality Review Rubric
C. Additional Rubrics [RFP Review only]

A. Reviewer Guide to the Criteria for Evidence

The Reviewer Guide was developed by Dr. Jeanne Wanzek and is designed to be used with the Quality Review Rubric within a very tight review process [described in 1. Reviewer Guide and Quality Review Rubric].

To best use the Reviewer Guide and rubric, a review team must use one or more of the following items:
1. Materials or studies submitted by a vendor in response to a Request for Information or Proposal; or
2. Materials or studies obtained online or through a journal search; or
3. The academic intervention tools chart from the reviews conducted by the National Center on Intensive Intervention (NCII), located on the NCII website (instructional intervention-tools chart). Generally, the NCII reviews are for targeted and/or intensive interventions and not classwide interventions. Be sure to click on the program link for each specific study to receive the detailed information needed to complete the relevant aspects of the rubric. For any rubric information not available in the NCII review, the study reviewed can be located to provide the additional information.

Reviewer Guide to the Criteria and Rubric
Author: Dr. Jeanne Wanzek

Instructions

General Information
Use the materials to identify the following information. If using a review in the tools chart from the National Center on Intensive Intervention as one piece of information, click on the program to find descriptive information on the intervention.

Intended Grade Levels
Mark the grade levels the program is intended to serve.

Reading Domains

Mark the domains the program is intended to address.

Print Concepts: Recognizing the components of written language (e.g., words, sentences, print moving from left to right, etc.)
Phonological Awareness: Recognizing and manipulating the sound system in spoken language
Alphabet Knowledge: Naming, distinguishing, and writing the letters of the alphabet
Phonics/Decoding: Identifying sound-symbol relationships and using them to read and spell words
Irregular/Sight Words: Reading words in which some or all of the letters do not represent their most commonly associated sounds; recognizing words (regular and irregular) by sight
Fluency: Ability to perform reading skills quickly, accurately, and with proper expression
Oral Language: Understanding of the phonology, grammar, morphology, vocabulary, discourse, and pragmatics of the English language
Vocabulary: Understanding and using words when listening, speaking, reading, and writing
Comprehension: Understanding the intended meaning of spoken and written language
Spelling: Ability to write or name the letters of a word
Writing: Composing text to express ideas or opinions
Other: Any other literacy-related domains; please specify

Recommended Dosage of Program
Identify the number of weeks of instruction provided/recommended in the program, the recommended session length per lesson (in minutes), and the recommended frequency of lessons (number of sessions per week). Mark N/R if the information is not provided.

Number of Lessons Available
Identify the number of lessons provided in the base materials and the number of lessons provided in any extended/supplemental materials that can be purchased. Mark N/A if base or extended lessons are not provided/available.

Placement Assessment Included
Indicate whether a placement assessment is included to help teachers begin students at the appropriate lesson to meet their needs.

Intended Population of Students
Mark the population(s) of students the program is intended to serve. Mark N/R if the information is not reported.

Recommended Implementers
Mark the recommended implementers for the program. Mark N/R if the information is not reported.

Recommended Grouping Formats
Mark the grouping formats that are recommended for program implementation. Mark N/R if the information is not reported.

Parent/Home Connection Strategies/Materials Included
Indicate whether the program includes strategies and/or provides specific materials for connecting with parents or families at home.

Study Information
Identify the number of primary analysis studies of the intervention that were submitted. Of these studies, identify how many are published in peer-reviewed outlets.

Costs
Identify the cost per pupil of the base materials, the training cost, the recommended hours of training to implement the program, and any costs for extended materials or additional training. Mark N/A if the cost category does not apply to the program. Mark N/R if the information is not reported.

Evidence for Program Effectiveness
For each unique, primary analysis study of the program submitted or reviewed previously by the National Center on Intensive Intervention, complete the ratings on internal validity, external validity, and findings. A primary analysis study of the program is an examination of the effect of an intervention on a particular sample (e.g., a set of students or schools). Studies of the way in which an intervention was implemented without evidence of impact, literature reviews, or meta-analyses are not considered primary analyses of the program.

Internal Validity

a. Research Design (this information is available in reviews on NCII under "Design")

Desired State/Optimal: Studies that randomly assign participants to study conditions (students or clusters can be randomly assigned) and demonstrate no significant overall or differential attrition (attrition bias). The What Works Clearinghouse (WWC) guidance is used in determining the significance of the overall and differential attrition or attrition bias (pp. 11-14): http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v3_0_standards_handbook.pdf
A higher rating is also given for data demonstrating the intervention and comparison groups were not statistically significantly different at pretreatment. In the case of Intensive Interventions where a single case design may be used, the baseline data for all cases should be stable prior to intervention implementation.

Acceptable: Randomized control trials with significant overall or differential attrition, or treatment comparison studies without random assignment (quasi-experimental studies) with study conditions matched on several pretreatment variables (e.g., demographics, reading achievement), referred to as tenable quasi-experiments in the NCII reviews. For Intensive Interventions, single case designs with at least 3 replications and at least 3 data points per phase of the design. Intervention and comparison groups that are statistically significantly different at pretreatment must be within .25 SD of each other at pretreatment, and outcomes of interest must be adjusted for these pretreatment differences in the analyses. In the case of Intensive Interventions where a single case design is used, the majority of cases (must be more than 3 cases) must show stable baseline data.
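The attrition portion of the Research Design criterion follows WWC guidance, which looks at two quantities: overall attrition across the randomized sample and differential attrition between conditions. The sketch below shows how those two rates are typically computed; whether a given combination is acceptable is judged against the WWC attrition boundaries in the handbook linked above, which are not reproduced here. Function and variable names are illustrative.

```python
def attrition_rates(n_assigned_t, n_analyzed_t, n_assigned_c, n_analyzed_c):
    """Overall and differential attrition as used in WWC-style design reviews.

    Overall attrition: share of all randomized participants missing from the
    analytic sample. Differential attrition: absolute difference between the
    treatment and comparison groups' attrition rates."""
    attr_t = 1 - n_analyzed_t / n_assigned_t
    attr_c = 1 - n_analyzed_c / n_assigned_c
    overall = 1 - (n_analyzed_t + n_analyzed_c) / (n_assigned_t + n_assigned_c)
    differential = abs(attr_t - attr_c)
    return overall, differential

# Example: 100 students randomized per condition; 85 and 92 remain at posttest
overall, differential = attrition_rates(100, 85, 100, 92)
print(f"overall {overall:.1%}, differential {differential:.1%}")  # overall 11.5%, differential 7.0%
```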

Undesired State: Study designs not included in the desired or acceptable ratings, quasi-experimental designs without adequate matching, or single case designs with fewer than 3 replications of effects or fewer than 3 data points per phase of the design.

b. Group/Person Conducting Study

Desired State/Optimal: The study was conducted by independent evaluators who do not have a conflict of interest related to the intervention's impact (e.g., financial interest in the intervention).
Acceptable: The study was conducted by those who have a possible conflict of interest (e.g., developers of the intervention but with no financial interest in the intervention).
Undesired State: The study was conducted by persons with a direct conflict of interest in the intervention.

c. Developer of Assessment

Desired State/Optimal: The majority of the reading domain assessments used in the study were developed by someone other than the authors/developers/vendors of the intervention.
Acceptable: Less than half of the reading domain assessments used in the study were developed by the vendor but not the authors/developers of the intervention.
Undesired State: All of the reading domain assessments used in the study were developed by the authors/developers of the intervention.

d. Technical Adequacy of Measures Used to Determine ES or Evidence of Improvement (reliability information is available in reviews on NCII in the measures table under "Measures Targeted" and "Measures Broader")

Desired State/Optimal: Reliability coefficients for all reading domain measures used in the study were at least .70, with most of the reliability coefficients at least .80. Validity coefficients for all reading domain measures used in the study were at least .30, with most of the validity coefficients at least .50.
Acceptable: Reliability coefficients for all reading domain measures used in the study were at least .70. Validity coefficients for all reading domain measures used in the study were at least .30.
Undesired State: Reliability coefficients for most reading domain measures used in the study were below .70, or reliability coefficients were not provided for most reading domain measures. Validity coefficients for most reading domain measures used in the study were below .30, or validity coefficients were not provided for most reading domain measures.

e. Data Collection

Desired State/Optimal: The reading domain outcome data were collected by assessors blind to the study conditions, and reliability of the assessors was at least .90. In the case of Intensive Interventions where a single case design is used, there is more than one assessor and reliability of the assessors is at least .90.
Acceptable: The reading domain outcome data were collected by assessors blind to part of the study conditions, and reliability of the assessors was at least .80.
Undesired State: The reading domain outcome data were collected by assessors who were not blind to study conditions, or the reliability of the assessors was below .80. In the case of Intensive Interventions where a single case design was used, a single assessor was used.
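To show how criterion (d) might be applied mechanically, the sketch below classifies a study's measures from their reliability and validity coefficients. It is a simplified, assumed reading of the criterion (in particular, the Undesired case is treated as the fall-through), and the function names are hypothetical rather than part of the rubric.

```python
def rate_technical_adequacy(reliabilities, validities):
    """Illustrative reading of criterion (d): classify a study's measures as
    Desired, Acceptable, or Undesired from reliability and validity coefficients.
    None is treated as "not reported"."""
    def most(values, threshold):
        # True when more than half of all measures meet or exceed the threshold
        return len([v for v in values if v is not None and v >= threshold]) > len(values) / 2

    rel_all_70 = all(v is not None and v >= .70 for v in reliabilities)
    val_all_30 = all(v is not None and v >= .30 for v in validities)

    if rel_all_70 and val_all_30 and most(reliabilities, .80) and most(validities, .50):
        return "Desired"
    if rel_all_70 and val_all_30:
        return "Acceptable"
    return "Undesired"

# Example: three measures with reliabilities .82, .91, .74 and validities .55, .48, .61
print(rate_technical_adequacy([.82, .91, .74], [.55, .48, .61]))  # Desired
```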

f. Data Analyses

Desired State/Optimal: The unit of assignment to condition (e.g., student, class, school) matches the unit of analysis and/or clustering/nesting of students in the unit of assignment is taken into account, OR when multiple comparisons are conducted the p value is adjusted to control Type I error.

Acceptable: The unit of assignment to condition (e.g., student, class, school) matches the unit of analysis and/or clustering/nest
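The Desired State above credits studies that adjust p values when multiple comparisons are conducted. The criterion does not prescribe a particular method; the sketch below shows one common choice, a Bonferroni adjustment, purely as an illustration.

```python
def bonferroni_adjust(p_values):
    """Bonferroni adjustment: multiply each p value by the number of comparisons,
    capped at 1.0. One simple way to control Type I error across multiple
    reading domain outcomes within a study."""
    m = len(p_values)
    return [min(p * m, 1.0) for p in p_values]

# Example: three outcome comparisons within one study
print([round(p, 3) for p in bonferroni_adjust([0.012, 0.030, 0.200])])  # [0.036, 0.09, 0.6]
```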
