Guidelines For Working With Third-Party Evaluators


Center to Improve Project Performance
Guidelines for Working with Third-Party Evaluators

Authors
Sarah Heinemeier, Compass Evaluation & Research
Anne D’Agostino, Compass Evaluation & Research
Jill Lammert, Westat
Thomas A. Fiore, Westat

June 24, 2014

Prepared for:
Office of Special Education Programs
U.S. Department of Education
Washington, DC

Prepared by:
Westat
An Employee-Owned Research Corporation
1600 Research Boulevard
Rockville, Maryland 20850-3129
(301) 251-1500

About these Guidelines

These Guidelines were developed as part of the Center to Improve Project Performance (CIPP) operated by Westat for the U.S. Department of Education, Office of Special Education Programs (OSEP). The authors thank our external reviewers Deborah Daniels-Smith, Amy Gaumer Erickson, and Shavaun Wall, and the OSEP Project Officers who provided input.

Suggested Citation:
Heinemeier, S., D’Agostino, A., Lammert, J.D., & Fiore, T.A. (2014). Guidelines for Working with Third-Party Evaluators. Rockville, MD: Westat.

The Center to Improve Project Performance has been funded with Federal funds from the U.S. Department of Education, Office of Special Education Programs, under contract number ED-OSE-13-C-0049. The project officer is Dr. Patricia Gonzalez. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. government.

Overview of the Center to Improve Project Performance

First formed in 2008, CIPP’s overall mission is to advance the rigor and objectivity of evaluations conducted by or for OSEP-funded projects so that the results of these evaluations can be used by projects to improve their performance and used by OSEP for future funding decisions, strategic planning, and program performance measurement. CIPP is operating under its second five-year contract.

The first CIPP contract provided summative evaluation support and oversight to 11 projects, selected by OSEP, in planning and executing their summative evaluations. CIPP staff worked with project and OSEP staff to refine each project’s logic model and develop its summative evaluation design. Based on the evaluation design and plan, CIPP staff oversaw project summative evaluation activities and provided technical assistance (TA), as needed, to the grantees by selecting samples; developing draft instruments; monitoring data collection and performing reliability checks; analyzing study data; providing accurate descriptions of the methods and valid interpretations of findings; and organizing, reviewing, and editing project evaluation reports.

The second CIPP contract continues the work with the selected projects from the prior contract. Additionally, beginning in 2014, CIPP will provide intensive TA to 16 of OSEP’s largest grantees in the development of their logic models and their formative evaluation plans. Related to the work on project evaluations, CIPP staff will work with OSEP staff to improve the consistency, objectivity, and rigor of OSEP’s 3+2 evaluations, a formal process applied to projects funded in excess of $500,000 to evaluate their implementation and early outcomes following Year 2 of their grant. Also, CIPP will continue to provide TA in evaluation to OSEP-funded projects on request, prepare a variety of TA products focused on evaluation issues, and provide presentations on evaluation through Webinars and conferences.

Contact Information:
Thomas Fiore, CIPP Project Director
Westat
ThomasFiore@westat.com

Jill Lammert, CIPP Assistant Project Director
Westat
JillLammert@westat.com

Patricia Gonzalez, Project Officer
Office of Special Education Programs
U.S. Department of Education
Patricia.Gonzalez@ed.gov

CENTER TO IMPROVE PROJECT PERFORMANCE (CIPP)
GUIDELINES FOR WORKING WITH THIRD-PARTY EVALUATORS

Contents

Introduction
Part 1. Making the Most of a Third-Party Evaluation
  1.1 Determining what is needed from the third-party evaluation
  1.2 Benefits and limitations of working with a third-party evaluator
  1.3 Determining when to bring a third-party evaluator on board
  1.4 Developing a third-party evaluator scope of work
  1.5 Creating an evaluation budget
Part 2. Finding and Hiring a Third-Party Evaluator
  2.1 Developing a Request for Proposals
  2.2 Navigating the solicitation and review process
    2.2.1 Assessing the applicant’s qualifications, background, and experience
    2.2.2 Checking references
  2.3 Preparing the Third-Party Evaluation Contract
    2.3.1 Determining the nature of the financial arrangement
    2.3.2 Outlining expectations/requirements for human subjects protections and protecting data confidentiality
    2.3.3 Identifying which products the project team expects to receive from the Third-Party Evaluator
Part 3. Monitoring and Managing the Third-Party Evaluator’s Work
  3.1 Establishing a strong working relationship with a third-party evaluator
    3.1.1 Setting reasonable goals and expectations
    3.1.2 Defining decision-making responsibility
  3.2 Maintaining regular communication
  3.3 Keeping track of evaluation progress
  3.4 Addressing problems with the third-party evaluation
    3.4.1 Dissolving the third-party evaluation contract
Part 4. Concluding the Evaluation Project

Appendices
  Appendix A. An Evaluation Primer
    Developing evaluation questions
    Identifying high quality outcomes
    Selecting an evaluation design
    Creating a data analysis plan
    Constructing a data collection plan
    Developing a comprehensive timeline
  Appendix B. Evaluation Needs Assessment Template
  Appendix C. Budgeting Guidance
  Appendix D. Time Frame Estimates for Common Data Collection Activities
  Appendix E. Common Practices for Protecting Personally Identifiable Information
  Appendix F. Recommended Readings on Research/Evaluation Methodology

Exhibits in Main Text
  Exhibit 1. Sample Evaluation Needs Assessment for the Anywhere State SLP Support Project
  Exhibit 2. Benefits and Limitations of Working with a Third-Party Evaluator
  Exhibit 3. The Influence of Timing on Expectations for Third-Party Evaluations
  Exhibit 4. Keeping Third-Party Evaluations Independent
  Exhibit 5. Evaluation Progress Checklist
  Exhibit 6. Evaluation Close-Out Tasks

Exhibits within Appendices
  Exhibit A1. The Anywhere State Speech and Language Pathologist (SLP) Support Project
  Exhibit A2. Anywhere State SLP Support Project Evaluation Questions and Hypotheses
  Exhibit A3. Evaluation Designs in the Anywhere State SLP Project Evaluation
  Exhibit A4. Evaluation questions aligned with Anywhere State SLP Support project goals, strategies/activities, outputs, and outcomes
  Exhibit A5. Sample Data Analysis Plan Template for the Anywhere State Speech Therapist Support project
  Exhibit A6. Sample evaluation data collection schedule for the Anywhere State Speech Therapist Support Project
  Exhibit A7. Sample Data Collection Summary Table for the Anywhere State Speech Therapist Support Project
  Exhibit A8. Anywhere State SLP Support Project’s Evaluation Timeline: Year 1
  Exhibit D1. Focus Group/Interview Timeline
  Exhibit D2. Web-based Survey Timeline

Introduction

Generally, the purpose of an evaluation is to provide information on a project’s implementation and outcomes. This includes providing qualitative and quantitative information on how well the project components have been implemented and analyzing the extent to which the project’s objectives and outcomes have been achieved. The results of such evaluations provide project implementers with evidence to make decisions about project improvements, expansion, and sustainability; assess efficiency and guide cost-containment strategies; and facilitate replication in other settings. More importantly, evaluation results can provide information on a project’s impact—information that can be used by the funder, the Office of Special Education Programs (OSEP), and by other key stakeholders to make an assessment of the nature and scope of project achievements. OSEP grantees are required to report on their project’s accomplishments using tools such as the Annual Performance Report. Grantees use evaluations to identify what to measure for this reporting and to plan for and track the process of collecting, analyzing, and reporting on each desired accomplishment or evaluation metric. This document is written to assist grantees and their OSEP Project Officers in planning for, finding and hiring, and working with third-party evaluators to design, implement, and complete a project evaluation.

Evaluations typically feature three components targeted at three distinct lines of inquiry: progress monitoring, formative evaluation, and summative evaluation.

• A progress monitoring component examines the extent to which the project is progressing toward attaining its objectives and yearly benchmarks. Methods used often rely on administrative records and on descriptive (e.g., frequency of responses, measures of central tendency) and correlational (i.e., exploring relationships among variables) statistical techniques.

• A formative component addresses questions related to how well the project’s components and strategies are being implemented. Methods commonly include qualitative techniques such as interviews and observations and quantitative techniques such as surveys, and descriptive and correlational statistics.

• A summative component addresses the question of the effectiveness of the project in achieving its goals and desired impact (including impact on students) and identifies features/components of the project that were unique and/or effective (or ineffective). Summative methods often focus on quantitative methods such as descriptive, correlational, and advanced statistics, but also can include qualitative analysis of observational, interview, and open-ended survey data.

Appendix A contains more information on each of these components. Some readers may find it helpful to review Appendix A before proceeding. Lammert, Heinemeier and Fiore (2013) is another good resource.
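To make the statistical vocabulary above concrete, the short sketch below shows what descriptive statistics (response frequencies, measures of central tendency) and a simple correlational statistic look like in practice. It is an illustrative aid only, not part of the original Guidelines; the variable names and data values are hypothetical.

```python
# Minimal sketch (illustrative, not from the Guidelines): descriptive and
# correlational statistics of the kind a progress-monitoring or formative
# evaluation component might report. All data values below are hypothetical.
from collections import Counter
from statistics import mean, median

# Hypothetical survey records: hours of training received and a 1-5 satisfaction rating
training_hours = [4, 6, 6, 8, 10, 12, 12, 14]
satisfaction = [2, 3, 3, 3, 4, 4, 5, 5]

# Descriptive statistics: frequency of responses and measures of central tendency
print("Satisfaction frequencies:", Counter(satisfaction))
print("Mean hours:", mean(training_hours), "Median hours:", median(training_hours))

# Correlational statistic: Pearson r between hours of training and satisfaction
def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sd_x = sum((a - mx) ** 2 for a in x) ** 0.5
    sd_y = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

print("Pearson r (hours vs. satisfaction):", round(pearson_r(training_hours, satisfaction), 2))
```

In an actual evaluation these calculations would typically be run in a statistical package by the evaluator; the sketch is only meant to show the kind of output each technique produces.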
Grantees often choose to work with a third-party evaluator—a qualified professional trained and experienced in the techniques to be used in the evaluation—who can help the project conduct any or all of the evaluation components listed above. This product is prepared under the assumption that grantees have already decided to hire a third-party evaluator, although the tasks that may be assigned to the third-party evaluator may vary considerably from one project to the next. Throughout, we provide ideas, tips, strategies, and suggestions grantees may find useful to make the most of an evaluation that incorporates a third-party evaluator. The document presents a discussion of the benefits, drawbacks, and limitations of using a third-party evaluator and practical guidelines for creating a third-party evaluation scope of work, developing a Request for Proposals, soliciting bids for and contracting with a third-party evaluator, and monitoring and managing the work of the third-party evaluator.

Part 1 of this document provides general considerations for grantees who want to make the most of a third-party evaluation. Following this, Part 2 discusses the steps involved in finding and hiring a third-party evaluator, and provides guidance on creating a Request for Proposals, completing the solicitation process, and preparing and executing a contract for services.

Part 3 provides specific guidance on monitoring and managing the work of the evaluation overall, including how to create a successful working relationship with the third-party evaluator and how to know when the evaluation is—and is not—proceeding as planned or meeting project needs. Part 4 focuses on wrapping up the evaluation project. Finally, the Appendices include a primer on evaluation design and planning (for those readers who may want some additional information on this topic), sample documents and templates that may provide further ideas and guidance for working with a third-party evaluator, information on practices for protecting confidentiality, and recommended readings on research and evaluation methodology.

Part 1. Making the Most of a Third-Party Evaluation

Typically, a third-party evaluator can be thought of as a “critical friend” who provides support, assistance, and feedback to the project through the formative and summative methods of evaluation. To this end, this document is written to help ensure a project makes the most of its investment in a third-party evaluator—which requires grantees have some exposure to and familiarity with evaluation basics. Appendix A includes a brief primer on evaluation basics; grantees may find reviewing that information useful before proceeding with the rest of this document.¹

Top 5 Tips for Working with Third-Party Evaluators
1. Hire as early as possible (such as during the application development or planning stages), even if only to conceptualize and design your evaluation;
2. Expect to devote time to the evaluation—even if only in the form of communication and monitoring the evaluation’s progress;
3. Conduct an evaluation needs assessment—use the findings to create a contracted scope of work for the third-party evaluator;
4. Communicate regularly—keep regular track of evaluation activities and any implementation issues that arise; and
5. Receive interim reports and work products at regular intervals—monitor implementation of activities and use feedback to make project improvements.

1.1 Determining what is needed from the third-party evaluation

At the start of a project, grantees may (a) have a complete evaluation plan, (b) need to revise or update the project’s evaluation plan, or (c) need to develop an evaluation plan for the project. Grantees that have a complete evaluation plan may elect to work with a third-party evaluator to complete specific tasks. Grantees that need to revise, update, or develop a plan may choose to work with a third-party evaluator to complete these design tasks. The third-party evaluator may then continue to work with the project to conduct the evaluation, or the grantee may elect either to do the evaluation work internally—especially if the evaluation will be primarily formative—or to search for and hire a different third-party evaluator.

The first step in developing an evaluation plan is to identify the project’s goals, strategies, outputs, outcomes, and the evaluation questions (see Appendix A). This information can then be used to complete an evaluation needs assessment, like in the example presented in Exhibit 1. Grantees are encouraged to review Appendix A or other evaluation resources (see Appendix F) if any of the items or terms in the needs assessment are unfamiliar. An evaluation needs assessment can help grantees identify the specific tasks that need to be conducted for the evaluation, including those that will be contracted to a third-party evaluator. Ideally, the needs assessment will be conducted as part of the proposal process or as soon as possible after the project receives its “green light” from OSEP.

The sample needs assessment presented in Exhibit 1 is for the fictional Anywhere State Speech and Language Pathologist (SLP) Support Project, which is designed to respond to the need for highly qualified SLPs who are proficient in evidence-based practices and who can work with bilingual secondary students with disabilities (more examples featuring the Anywhere State SLP Support project are presented in Appendix A).

¹ See also Lammert, Heinemeier & Fiore (2013).

Exhibit 1. Sample Evaluation Needs Assessment for the Anywhere State SLP Support Project

Question (1): Does your program already have an evaluation plan (a description of the evaluation questions, data collection tools and methods, analysis approach, and reporting requirements)?
Note: Very often projects have some or all of an evaluation plan in place but the plan requires review or revision after a project is funded.

Check the best option:
• Yes, there is a complete evaluation plan in place, which responds in full to the evaluation requirements—proceed to question 2. If you want to double check your answer, complete the checklist below to identify possible third-party evaluator tasks.
• There is a plan, but I’m not sure if it is complete or if it responds to requirements in full—complete the checklist below to identify possible third-party evaluator tasks.
• No—complete the checklist below to identify possible third-party evaluator tasks.

Possible Third-Party Evaluator Tasks:
• Create or review the comprehensive evaluation plan, OR
• Review, develop, or refine formative evaluation questions
• Review, develop, or refine summative evaluation questions
• Identify or review data collection sources
• Identify or review data collection instruments
• Create/pilot test data collection instrument(s)
• Design data collection procedures for:
  - Implementation progress monitoring
  - Service statistics (e.g., numbers served; numbers of services provided)
  - Fidelity of implementation
  - Outcomes/impact data
• Design data entry/management procedures
• Create data analysis plan
• Design or review evaluation budget
• Design or review report template(s)
• Conduct formative evaluation activities

Question (2): Are there internal staff with skills necessary to conduct the evaluation?
Note: Very often projects will ensure statisticians and qualitative specialists (team members who specialize in qualitative research) are available to work on or support the evaluation.

Check the best option:
• Yes, internal staff are qualified for the types of evaluation required—check off the applicable and needed skills below and proceed to question 3:
  - Formative evaluation—the evaluation will collect data on implementation progress and provide periodic feedback to project implementers to support project improvement
  - Measuring fidelity of implementation—the evaluation will collect data on implementation of the core components of the project, measure fidelity to the proposed theory of change, create and assign fidelity scores, and determine the level of component-level and overall fidelity of implementation
  - Experimental design—the evaluation will collect data on individuals randomly assigned into treatment and control groups; the evaluation will rigorously monitor treatment and control group conditions over the duration of the project
  - Quasi-experimental design—the evaluation will collect data on individuals placed into treatment and comparison groups; the evaluation will rigorously monitor treatment and comparison group conditions over the duration of the project
  - Non-experimental—the evaluation will collect data on the treatment group; a comparison group may be created post hoc (the evaluation will not track comparison group conditions over the duration of the project)
  - Design and implementation of a sampling plan—the evaluation will design a sample that is sufficient for the evaluation’s approach, methodology, and analysis framework; the evaluation will identify how to treat sampled data (e.g., establish sample weights and limitations on interpretation of data, if any)
• Unsure or No—complete the checklist below to identify possible third-party evaluator tasks.

Possible Third-Party Evaluator Tasks:
• Conduct study of fidelity of implementation
• Implement experimental or quasi-experimental design study (evaluator should have advanced background and expertise or training in sampling, research methodology)
• Implement non-experimental study (evaluator should have basic background and expertise or training in research methodology)
• Design and implement a sampling plan

Question (3): Can internal staff be sufficiently allocated to perform all evaluation tasks and responsibilities?

Check the best option:
• Yes—proceed to question 4.
• Unsure or No—complete the checklist below to identify possible third-party evaluator tasks.

Possible Third-Party Evaluator Tasks:
• Create/pilot test data collection instruments
• Collect data on:
  - Implementation progress
  - Service statistics (e.g., numbers served; numbers of services provided)
  - Fidelity of implementation
  - Outcomes/impact
• Perform data entry/management
• Conduct data analysis
• Provide performance feedback to project team
• Write reports
• Other:

Question (4): Can internal staff perform all evaluation tasks and responsibilities objectively and without jeopardizing the credibility of evaluation findings?

Check the best option:
• Yes—proceed to item 5.
• Unsure or No—complete the checklist below to identify possible third-party evaluator tasks.

Possible Third-Party Evaluator Tasks:
• Collect data on:
  - Implementation progress
  - Service statistics (e.g., numbers served; numbers of services provided)
  - Fidelity of implementation
  - Outcomes/impact
• Perform data entry/management
• Conduct data analysis
• Provide performance feedback to project team
• Write reports
• Other:

(5) NEEDS ASSESSMENT COMPLETED
If the answer to all questions is “yes”, the project may not need a third-party evaluator.
If the answer to one or more questions is “unsure or no”, the project may benefit from hiring a third-party evaluator to perform specific tasks, as identified in this assessment.

As illustrated in Exhibit 1, once the needs assessment is completed, grantees may find that the project already has qualified and available staff who can perform a number of evaluation tasks. Similarly, the needs assessment can help the project team to identify the areas where additional support may be needed for the evaluation. Grantees can use the items identified in the “Possible Third-Party Evaluator Tasks” column above to develop a list of third-party evaluator responsibilities and tasks.

In the example presented above, the needs assessment indicated the following:
• The project had an evaluation plan that was submitted with its proposal. The evaluation plan received comments from the OSEP review team and requires revisions.
• The plan identified several quasi-experimental elements to the summative evaluation. However, none of the internal project staff have experience in implementing quasi-experimental studies.
• The plan identified two sampling opportunities. However, none of the internal project staff have experience in designing or implementing sampling plans.
• The project needed assistance collecting outcome data, especially observation data. The project also needed assistance with data entry, data quality reviews, and data analysis and reporting.
• The project’s internal staff could not provide sufficient objectivity and credibility, especially with regard to outcomes data collection, analysis, and reporting.
• The project needs assessment identified the following tasks that could benefit from third-party evaluator support:
  o Evaluation design with specific attention to:
    - Review, development, or refinement of evaluation questions
    - Identification or review of data collection sources
    - Identification or review of data collection instruments
    - Creation/pilot testing of data collection instrument(s)
    - Design of data collection procedures (service statistics and outcomes/impact data)
    - Design of data entry/management procedures
    - Development of a data analysis framework
  o Guidance and expertise in designing and implementing quasi-experimental studies
  o Guidance and expertise in designing and implementing sampling plans
  o Data collection (outcomes/impact data)
  o Data entry/management
  o Data analysis
  o Report writing

This information can be used to create a scope of work for the third-party evaluator (and for the overall project evaluation), prepare a Request for Proposals (RFP), define contract terms, and establish project management milestones, as discussed later in this document. A blank needs assessment form is located in Appendix B.
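As a purely illustrative aid, and not part of the original Guidelines, the sketch below shows one way the answers from a needs assessment could be rolled up into a draft task list for a third-party evaluator scope of work. The questions, answers, and task names are simplified, hypothetical stand-ins for the items in Exhibit 1.

```python
# Minimal sketch: turning hypothetical needs assessment answers into a draft
# list of third-party evaluator tasks. Questions, answers, and tasks below are
# simplified examples loosely modeled on Exhibit 1, not an official template.
needs_assessment = {
    "1. Complete evaluation plan in place?": {
        "answer": "unsure",
        "tasks_if_not_yes": [
            "Review and revise the comprehensive evaluation plan",
            "Identify or review data collection sources and instruments",
        ],
    },
    "2. Internal staff with needed evaluation skills?": {
        "answer": "no",
        "tasks_if_not_yes": [
            "Design and implement the quasi-experimental study",
            "Design and implement a sampling plan",
        ],
    },
    "3. Internal staff sufficiently allocated?": {
        "answer": "no",
        "tasks_if_not_yes": [
            "Collect outcomes/impact data",
            "Perform data entry/management and data analysis",
        ],
    },
    "4. Internal staff objective and credible?": {
        "answer": "unsure",
        "tasks_if_not_yes": [
            "Analyze and report outcomes data independently",
        ],
    },
}

# Any answer other than "yes" adds that question's tasks to the draft scope of work.
draft_scope_of_work = [
    task
    for item in needs_assessment.values()
    if item["answer"] != "yes"
    for task in item["tasks_if_not_yes"]
]

print("Draft third-party evaluator tasks:")
for task in draft_scope_of_work:
    print(" -", task)
```

In practice this roll-up is usually done on paper or in a spreadsheet; the point is simply that each "unsure or no" answer maps to candidate tasks for the scope of work and, later, the RFP.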

1.2 Benefits and limitations of working with a third-party evaluator

Grantees who work with a third-party evaluator should be aware of the potential benefits and limitations of this working relationship, as shown in Exhibit 2. Benefits include the needed skills or objectivity brought to the project by the third-party evaluator, while limitations refer to the often unforeseen or unplanned tasks or costs associated with monitoring and managing the work of the third-party evaluator.

Exhibit 2. Benefits and Limitations of Working with a Third-Party Evaluator

Benefits—Third-party evaluators can:
• Bring technical expertise in research methodology, statistics, or related topics to the project team
• Provide credibility and objectivity by acting as an external “critical friend”
• Take on responsibility for completing some or all of the (formative and summative) evaluation tasks, allowing project staff to focus on project implementation

Limitations—Third-party evaluators may:
• Add unanticipated or additional cost to the project
• Add to project monitoring and management tasks focused on the work of contractors
• Not know the project background or content area as well as project staff
• Be less available or accessible, as compared to project staff

It is important to keep in mind that even when the third-party evaluator has a significant role in the project, the Project Director (or Principal Investigator) bears ultimate responsibility for ensuring that the project and its evaluation are carried out as planned and that all OSEP project implementation and reporting requirements are met.

1.3 Determining when to bring a third-party evaluator on board

The decision of when to hire the third-party evaluator affects what the evaluator can and cannot provide to the project. If the grant
