USAID EVALUATION POLICY: Learning From Experience


EVALUATION: Learning from Experience

USAID EVALUATION POLICY
JANUARY 2011, UPDATED OCTOBER 2016
WASHINGTON, DC

PREFACE

This policy update is the work of USAID’s Bureau for Policy, Planning, and Learning’s Office of Learning, Evaluation, and Research (PPL/LER). The update was made to ensure consistency with revisions to USAID’s Automated Directives System (ADS) Chapter 201 Program Cycle Operational Policy, released in September 2016. The ADS revisions changed evaluation requirements to simplify implementation and increase the breadth of evaluation coverage. They also seek to strengthen evaluation dissemination and utilization, challenges that the 2011 version of the Evaluation Policy had emphasized.

ACKNOWLEDGEMENTS

2016 UPDATE

This policy update reflects the comments and feedback of numerous USAID colleagues in the field and in Washington, as well as the broader development and evaluation community. We thank those who have generously shared their experiences and challenges in implementing the USAID Evaluation Policy over the past five years. This update also relies heavily on the pioneering work of the Evaluation Policy Task Team responsible for the initial development of the USAID Evaluation Policy in 2011.

– October 2016

1 CONTEXT

USAID stewards public resources to promote sustainable development in countries around the world. Reflecting the intent of the authorizing legislation of the U.S. Agency for International Development (the Foreign Assistance Act of 1961, as amended) and embodying the aims of the current National Security Strategy, the Presidential Policy Directive on Global Development, and the Quadrennial Diplomacy and Development Review, USAID pursues this goal through effective partnerships across the U.S. Government, with partner governments and civil society organizations, and with the broader community of donor and technical agencies. The Agency applies the Paris Declaration principles of ownership, alignment, harmonization, managing for results, and mutual accountability.

To fulfill its responsibilities, USAID bases policy and investment decisions on the best available empirical evidence, and uses the opportunities afforded by project implementation to generate new knowledge for the wider community. Moreover, USAID commits to measuring and documenting project achievements and shortcomings so that the Agency’s multiple stakeholders gain an understanding of the return on investment in development activities.

USAID recognizes that evaluation, defined in Box 1, is the means through which it can obtain systematic, meaningful feedback about the successes and shortcomings of its endeavors. Evaluation provides the information and analysis that prevents mistakes from being repeated, and that increases the chance that future investments will yield even more benefits than past investments. While it must be embedded within a context that permits evidence-based decision-making, and rewards learning and candor more than superficial success stories, the practice of evaluation is fundamental to the Agency’s future strength.

This policy builds on the Agency’s long and innovative history of evaluation, and seeks to redress the decline in the quantity and quality of evaluation practice within the Agency in the recent past. The number of evaluations submitted to USAID’s Development Experience Clearinghouse (DEC) decreased from nearly 500 in 1994 to approximately 170 in 2009, despite an almost three-fold increase in program dollars managed. Over that period, the Agency’s evaluation activities had been subject to both internal and external critiques regarding methodological quality, objectivity, access to evaluation findings, and use of evaluation recommendations for decision-making.

Since the 2011 release of the Evaluation Policy, USAID has improved both the quantity and quality of its evaluations, to inform development programming that ultimately achieves better results. The number of commissioned evaluations has rebounded from an annual average of about 130 in the five years prior to the 2011 Evaluation Policy, to an annual average of about 230 over the last five years. The Agency now offers classroom training in evaluation as well as a number of processes and resources to improve the methodological quality, objectivity, access to evaluation findings, and use of evaluation conclusions for decision-making. A 2013 PPL-commissioned study showed some quality improvements, and a 2016 study showed that USAID’s overall evaluation utilization is strong, with 71 percent of evaluations being used to support and/or modify Agency activities on the ground. While these trends are encouraging, there are still many areas for improvement, and USAID will continue striving to improve the quality and utilization of its evaluations.

This policy responds to today’s needs. High expectations exist for respectful relationships among donors, partner governments, and beneficiaries. Many stakeholders are demanding greater transparency in decision-making and disclosure of information. Development activities encompass not only the traditional long-term investments in development through the creation of infrastructure, public sector capacity, and human capital, but also shorter-term interventions to support and reinforce stabilization in environments facing complex threats. All of these features of the current context inform a policy that establishes higher standards for evaluation practice, while recognizing the need for a diverse set of approaches.

This policy is intended to provide clarity to USAID staff, partners, and stakeholders about the purposes of evaluation, the types of evaluations that are required and recommended, and the approach for conducting, disseminating, and using evaluations. Intended primarily to guide staff decisions regarding the practice of evaluation within projects managed by USAID, it also serves to communicate to implementing partners and key stakeholders USAID’s approach to evaluation.

This policy draws in significant ways on the evaluation principles and guidance developed by the Organization for Economic Cooperation and Development (OECD) Development Assistance Committee (DAC) Evaluation Network. In addition, the policy is consistent with the Department of State Evaluation Policy, and USAID will work collaboratively with the Department of State Bureau of Resource Management to ensure that the organizations’ guidelines and procedures with respect to evaluation are mutually reinforcing. USAID also will work closely with the Department of State’s Office of the Director of U.S. Foreign Assistance in its efforts to strengthen and support sound evaluation policies, procedures, standards, and practices for evaluation of foreign assistance programs. Finally, this policy helps to implement the Foreign Aid Transparency and Accountability Act of 2016 for USAID and works in concert with existing Agency policies, strategies, and operational guidance, including those regarding project design, evaluation-related competencies of staff, performance monitoring, knowledge management, and research management. The policy is operationalized in USAID’s Automated Directives System (ADS) Chapter 201 Program Cycle Operational Policy.

BOX 1: CONCEPTS AND CONSISTENT TERMINOLOGY

To ensure consistency in the use of key concepts, the terms and classifications highlighted below will be used by USAID staff and those engaged in USAID evaluations.

Evaluation is the systematic collection and analysis of information about the characteristics and outcomes of strategies, projects, and activities as a basis for judgments to improve effectiveness, and timed to inform decisions about current and future programming. Evaluation is distinct from assessment or an informal review of projects.

- Impact evaluations measure the change in a development outcome that is attributable to a defined intervention; impact evaluations are based on models of cause and effect and require a credible and rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change. Impact evaluations in which comparisons are made between beneficiaries that are randomly assigned to either a treatment or a control group provide the strongest evidence of a relationship between the intervention under study and the outcome measured.

- Performance evaluations encompass a broad range of evaluation methods. They often incorporate before-after comparisons, but generally lack a rigorously defined counterfactual. Performance evaluations may address descriptive, normative, and/or cause-and-effect questions: what a particular project or program has achieved (at any point during or after implementation); how it is being implemented; how it is perceived and valued; whether expected results are occurring; and other questions that are pertinent to design, management, and operational decision-making.

- Performance monitoring is the ongoing and systematic collection of performance indicator data and other quantitative or qualitative information to reveal whether implementation is on track and whether expected results are being achieved. Performance monitoring includes monitoring of outputs and project and strategic outcomes.

- Performance indicators measure expected outputs and outcomes of strategies, projects, or activities based on a mission’s Results Framework or a project’s or activity’s logic model. In general, outputs are directly attributable to the program activities, while project outcomes represent results to which a given program contributes but for which it is not solely responsible.

- Performance management is the systematic process of planning, collecting, analyzing, and using performance monitoring data and evaluations to track progress, influence decision-making, and improve results. Performance management is one aspect of the larger process of continuous learning and adaptive management.

NOTE: In referring to projects throughout the document, the term is used to mean a set of complementary activities, over an established timeline and budget, intended to achieve a discrete development result. The term project does not refer only or primarily to an implementing mechanism, such as a contract or grant.
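The randomized-assignment design that Box 1 identifies as the strongest form of impact evaluation can be illustrated with a small simulation. This is a minimal sketch with invented data and an invented effect size of 5 points, not a USAID method or real evaluation data.

```python
import random
import statistics

random.seed(0)

# Hypothetical illustration: 200 beneficiaries are randomly assigned to a
# treatment group (receives the intervention) or a control group. Outcomes
# are simulated; the "true" intervention effect is set to 5 points.
treated_outcomes, control_outcomes = [], []
for _ in range(200):
    outcome = random.gauss(50, 10)   # outcome without the intervention
    if random.random() < 0.5:        # random assignment to treatment
        treated_outcomes.append(outcome + 5)
    else:
        control_outcomes.append(outcome)

# Because assignment is random, the control group's mean outcome estimates
# the counterfactual, so the difference in means estimates the impact.
estimated_impact = (statistics.mean(treated_outcomes)
                    - statistics.mean(control_outcomes))
print(round(estimated_impact, 1))  # close to the true effect of 5
```

The control group plays the role of the "credible and rigorously defined counterfactual": without random assignment, the difference in means could reflect pre-existing differences between groups rather than the intervention.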

2 PURPOSES OF EVALUATION

Evaluation in USAID has two primary purposes: accountability to stakeholders and learning to improve development outcomes.

ACCOUNTABILITY: Measuring project effectiveness, relevance, and efficiency, disclosing those findings to stakeholders, and using evaluation findings to inform resource allocation and other decisions is a core responsibility of a publicly financed entity. For evaluation to serve the aim of accountability, metrics should be matched to meaningful outputs and outcomes that are under the control or sphere of influence of the Agency. Accountability also requires comparing performance to ex ante commitments and targets, using methods that obtain internal validity of measurement, ensuring credibility of analysis, and disclosing findings to a broad range of stakeholders, including the American public.

LEARNING: Evaluations of country and regional strategies, projects, and activities that are well designed and executed can systematically generate knowledge about the magnitude and determinants of performance, permitting those who design and implement them—including USAID staff, host governments, and a wide range of partners—to refine designs and introduce improvements into future efforts. Learning requires: careful selection of evaluation questions to test fundamental assumptions underlying strategies and project designs; methods that generate findings that are internally and externally valid (including clustering evaluations around priority thematic questions); and systems to share findings widely and facilitate integration of evaluation conclusions and recommendations into decision-making.

These two purposes can be achieved simultaneously and span all projects. However, neither of these purposes can be achieved solely through the evaluation function. Each requires intentional actions by senior management to foster a culture of accountability and learning, and to provide appropriate incentives (and minimize disincentives) for staff at all levels.

3 BASIC ORGANIZATIONAL ROLES AND RESPONSIBILITIES

Each of the Agency’s operating units that implement development projects will comply with this policy, supported by a set of central functions. Operating units will:

- Identify an evaluation point of contact. This individual will be responsible for ensuring compliance with the policy across the breadth of the operating unit’s projects, and will interact with the regional and technical bureau points of contact and PPL/LER. The time allocated to this function should be commensurate with the size of the evaluation portfolio being managed.

- Invest in training of key staff in evaluation management and methods through Agency courses and/or external opportunities.

- Actively encourage staff to participate in relevant evaluation communities of practice for knowledge exchange.

- Develop, as needed, the guidance, tools, and contractual mechanisms to access technical support specific to the types of evaluations required for the country, region, or topical area in the domain of the operating unit. In general, this will require collaboration between the Program and Technical Offices. USAID missions will prepare a Mission Order on evaluation describing the context-specific approaches and expectations regarding evaluation.

- Prepare on a yearly basis an inventory of evaluations to be undertaken during the following fiscal year, as well as those completed. In general, the evaluations will be identified in Performance Management Plans (PMP). The information will be included in the Evaluation Registry. Evaluation Registry guidance will indicate the specific information to be supplied.

- Develop, through the Program Office (as defined in ADS 100), a budget estimate for the evaluations to be undertaken during the following fiscal year. On average, at least 3 percent of the program budget managed by an operating unit should be dedicated to external evaluation.¹

- Ensure that final statements of work for external evaluations adhere to the standards described below (see Section 4). In general, this will require collaboration between the Program and Technical Offices. The Program Office may engage the regional and technical bureaus in reviews of evaluation statements of work. In missions, the Program Office will manage the contract or grant relationship with the external evaluation team or consultant except in unusual circumstances, as determined by the mission director.

- Ensure, through the Program Office, that evaluation draft reports are assessed for quality by management and through an in-house peer technical review, and that comments are provided to the evaluation teams.

- Ensure, through the Program Office, that plans for dissemination and use of evaluations are developed and that evaluation final reports and their summaries are submitted within three months of completion to the Development Experience Clearinghouse (DEC) at http://dec.usaid.gov.

- Ensure, through the Program Office, that evaluation datasets are submitted to the Development Data Library.

- Develop a post-evaluation action plan upon completion of an evaluation and integrate evaluation findings into decision-making about strategies, program priorities, and project design. In general, the Program Office will take responsibility for this function.

- Participate, where relevant, in the Agency-wide process of developing an evaluation agenda.

¹ An external evaluation is one that is commissioned by USAID, rather than by the implementing partner, and in which the team leader is an expert external to USAID, who has no fiduciary relationship with the implementing partner.
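The "at least 3 percent" budgeting guideline above reduces to simple arithmetic. The program budget figure below is invented for illustration only; it is not from the policy.

```python
# Hypothetical illustration of the "on average, at least 3 percent"
# external evaluation budgeting guideline. The $40M program budget is an
# invented example figure, not a figure from the policy.
program_budget = 40_000_000  # illustrative operating unit budget, USD

# Integer arithmetic keeps the result exact: 3 percent of the budget.
min_external_evaluation_budget = program_budget * 3 // 100

print(min_external_evaluation_budget)  # 1200000
```

So a unit managing a $40 million program portfolio would, on average, plan at least $1.2 million for external evaluation over the same period.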

Each of the technical and regional bureaus will:

- Identify an evaluation point of contact. This individual will be responsible for ensuring compliance with the policy across the breadth of the operating unit’s projects, and will interact with PPL/LER. The time allocated to this function should be commensurate with the size of the evaluation portfolio being managed.

- Invest in training of key staff in evaluation management and methods through Agency courses and/or external opportunities.

- Participate in an evaluation community of practice for knowledge exchange.

- Organize, on request of the mission Program Offices, reviews of evaluation statements of work and draft evaluation reports.

- Participate in the Agency-wide process of developing an evaluation agenda.

- Respond on a priority basis with technical input for evaluation design and implementation, particularly for Presidential Initiatives and large country programs. This includes providing input into the requests for proposals for mechanisms to access technical support for evaluations.

PPL/LER is an institutional source of guidance, support, and quality assurance for the design, conduct, dissemination, and synthesis of evaluations. PPL/LER will:

- Develop training curricula and evaluation tools that have wide application across the Agency’s portfolio. Identify opportunities for external training in specialized topics.

- Organize and lead the Evaluation Interest Group and other cross-Agency evaluation-related knowledge networks.

- Develop and/or update, with the Office of Human Capital and Talent Management, capabilities statements for evaluation specialists and senior evaluation specialists.

- Organize technical resources for evaluation that can be accessed through a flexible mechanism. This includes, among other services: developing appropriate technical specifications for competitively procured evaluation expertise, reviewing and approving evaluation statements of work, coordinating access to evaluation services, and providing estimates of evaluation costs.

- At any time, and particularly when requested by the Administrator, undertake or require a performance and/or impact evaluation of any project within the USAID portfolio.

- Undertake occasional thematic or meta-evaluations to generate recommendations regarding Agency priorities, policies, and practices. These evaluations will adhere to the standards described below.

- Undertake occasional post-implementation evaluations to examine long-term effects of projects.

- Provide clearance on principled exceptions to the requirement of public disclosure of evaluation findings.

- Lead the preparation of an Agency-wide evaluation agenda. Broad input from across the Agency, and from external stakeholders, will be sought during this process.

- Prepare a periodic report highlighting recent key evaluation practices and findings, and changes and challenges in evaluation practice. Information for this will come from the Evaluation Registry, among other sources.

- Serve as the main point of contact on evaluation with domestic and international agencies and donors, nongovernmental organizations, foundations, academic institutions, multilateral organizations, and local governments and organizations in the countries where USAID works.

- Participate with other development actors, including partner countries, implementing partners, and other USAID and U.S. Government entities, in joint crosscutting evaluations.

4 EVALUATION PRACTICES

Evaluations at USAID should be:

INTEGRATED INTO DESIGN OF STRATEGIES, PROJECTS, AND ACTIVITIES

USAID’s renewed focus on evaluation has a complementary and reinforcing relationship with other efforts to focus projects and activities on achieving measurable results. These include a revival of project design capacity and strengthening the disciplinary expertise in priority ar
