MONITORING AND EVALUATION PLATFORMS - USAID Learning Lab


MONITORING AND EVALUATION PLATFORMS
Considerations for Design and Implementation Based on a Survey of Current Practices

DISCUSSION NOTE
PUBLIC VERSION
September 2013

Bureau for Policy, Planning and Learning
Office of Learning, Evaluation & Research

Contents
EXECUTIVE SUMMARY
I. PURPOSE AND METHODOLOGY
II. OVERVIEW OF MISSION M&E PLATFORMS
III. WHY M&E PLATFORMS?
IV. DESIGNING AN M&E PLATFORM
V. CONTRACTING AN M&E PLATFORM
VI. IMPLEMENTATION CONCERNS
VII. CONCLUSION

EXECUTIVE SUMMARY
Monitoring and Evaluation (M&E) Platform mechanisms have become increasingly popular across USAID missions. This report is designed to assist missions on issues to consider in developing and managing M&E Platforms based on recent experience from across the Agency. The term "Monitoring and Evaluation Platform" is used to describe a variety of mission mechanisms to support monitoring, evaluation, data management, and other performance management and learning tasks for USAID missions.

Data for this report is based on key informant interviews with USAID staff and a review of contract documents for M&E Platforms, including Statements of Work (SOWs). In total, 23 current, recently completed, or planned M&E Platforms were identified across 20 separate missions. Mission respondents identified the following reasons for choosing an M&E Platform to address mission M&E needs:
1. Speed, i.e., the possibility of quickly fielding M&E tasks;
2. Implementer continuity from a single contractor completing mission M&E tasks;
3. Limited staff resources to do M&E or to contract each M&E task separately;
4. Access to technical M&E expertise that is not available in the mission; and
5. The ability to execute a variety of related functions in addition to M&E.

All of the twenty-three M&E Platforms addressed in this report include, at minimum, evaluation or monitoring tasks. A majority of these M&E Platforms included both. In addition to monitoring and evaluation functions, other functions frequently included in these mechanisms are:
- monitoring and evaluation data management;
- conducting other analyses and assessments;
- monitoring and evaluation capacity building; and
- various other functions, such as strategic communications and learning functions.

Overall, key informants were satisfied with their M&E Platform, and at least half would recommend their M&E Platform to other missions. "Speed and ease of procurement" of the M&E Platform itself received the lowest satisfaction ratings among the questions asked.

Issues for missions to consider in designing the M&E Platform include how many and which M&E functions to include in the M&E Platform, how much flexibility is needed, and whether the platform should cover specific sectors or the entire mission portfolio. These design issues affect further decisions to be made regarding contracting the M&E Platform, including what type of contract to use; how to fund the M&E Platform; and the entity to contract with - small/large, US/local, one contractor/multiple contractors. Finally, respondents noted a variety of issues that could affect M&E Platform implementation, such as the time and technical skills needed to manage the contract, tensions between program offices and technical offices, and the capacity of M&E Platform implementers.

M&E Platforms that are appropriately designed and well managed can be a successful tool for meeting mission M&E needs and contributing to a changed culture that values evidence-based decision making and learning. Details about each M&E Platform addressed in this report can be found in the Annexes. SOWs for many of these are available to USAID staff on ProgramNet.

I. PURPOSE AND METHODOLOGY
Monitoring and Evaluation (M&E) Platform mechanisms have become increasingly popular across USAID missions. This report is designed to assist missions on issues to consider in developing and managing M&E Platforms based on recent experience from across the Agency.1

The report is organized as follows:
1. The rest of Section one defines "Monitoring and Evaluation Platform" and describes the methodology for this report.
2. Section two provides an overview of the M&E Platforms discussed in this report.
3. Section three briefly describes the motivations and potential benefits for creating an M&E Platform.
4. Section four discusses design issues to consider when developing an M&E Platform Statement of Work (SOW).
5. Section five discusses issues related to contracting the M&E Platform.
6. Section six discusses implementation hurdles experienced by missions with M&E Platforms.

Defining "Monitoring and Evaluation Platform"
The term "Monitoring and Evaluation Platform" is used to describe a variety of mission mechanisms to support monitoring, evaluation, data management, and other performance management and learning tasks for USAID missions. For the purpose of this report, a Monitoring and Evaluation Platform is defined as an implementing mechanism which gives a mission or other operating unit access to technical and advisory services to design and carry out multiple, third-party, monitoring and evaluation tasks.2 These M&E Platform mechanisms may often include additional tasks related to monitoring and evaluation, such as management information system development, M&E training, Geographic Information System (GIS) services, strategic communications, assessments, and learning activities. Platforms that include only such services, but not monitoring or evaluation services, were not considered M&E Platforms and were not included in this report. Moreover, only those mechanisms that conduct monitoring and evaluation tasks over a minimum of two years and across multiple sectors or offices were considered M&E Platforms and were included in this report. However, some information was gathered on a few sector/office-specific M&E mechanisms and M&E mechanisms of less than one year in duration. Information about such mechanisms is noted where appropriate.

Methodology
Data for this report is based on key informant interviews with USAID staff and a review of contract documents for M&E Platforms, including Statements of Work (SOWs). To understand which missions currently have, previously had, or are planning to have M&E Platforms, PPL reached out to USAID/W regional bureaus and missions through formal and informal communications. Key informants mostly self-identified through ProgramNet or through PPL outreach. In total, representatives of 35 missions were contacted with requests for an interview. Of these, representatives of 22 missions responded (63% response rate).

One to three informants per mission were interviewed in each of the 22 missions. In total, 32 individual and group interviews were conducted. Most of the interviewees were based in mission Program Offices, with an even split between Foreign Service Officers (FSOs)/Personal Service Contractors (PSCs) and Foreign Service Nationals (FSNs).

1 PPL thanks Jindra Cekan and Amy Stenoien for their data collection and analysis contributions to this report.
2 For this report, "third-party" means that the implementer of the M&E Platform is monitoring and evaluating the activities and expected results of other USAID implementers; mechanisms that monitor or evaluate an implementer's own activities or expected results were not included. "Multiple monitoring and evaluation tasks" means that the mechanism should include at least two evaluations or the monitoring of multiple projects or activities. Mechanisms for conducting a single evaluation or for monitoring a single project or activity are not included in this definition.

Mission respondents were interviewed based on a standard questionnaire that included structured questions regarding satisfaction with the M&E Platform and open-ended questions regarding the design, implementation, and use of data from M&E Platforms in their mission. Interviews were conducted by telephone or in person and lasted from 30 minutes to over 1.5 hours.

Finally, mission representatives were asked to send the Statements of Work (SOWs) and/or contracts of their M&E Platforms for further analysis. Copies are available to USAID staff on ProgramNet. Missions with multiple active, recently completed, or planned M&E Platforms were asked to send documentation about each of these contracts.

II. OVERVIEW OF MISSION M&E PLATFORMS
In total, 23 current, recently completed, or planned M&E Platforms were identified across 20 separate missions (see Table 1). Sixteen of these missions had a current or recently completed M&E Platform, while seven missions were in the process of procuring an M&E Platform as of May 2013.3 (Three missions have a recent M&E Platform and are also in the process of procuring a new M&E Platform.) Four of these missions - Afghanistan, Iraq, Pakistan, and Uganda - had previous M&E mechanisms that were completed prior to 2012 and are not discussed in this report.

Annex 1 provides contract details on each of the 16 current or recently completed M&E Platform mechanisms. In summary:
- 6 were Cost Reimbursement contracts
- 5 were task orders under PPL/LER's Evaluation Services IDIQC mechanism
- 3 were Indefinite Delivery Indefinite Quantity Contracts (IDIQC)
- 2 were MOBIS task orders

Duration of the contracts varied from 1 to 5 years, although most (12 of 16 M&E Platforms) allowed 4 to 5 years of services if all option years are counted.

Annex 2 summarizes the functions of both recent and planned M&E Platform mechanisms. Figure 1 provides a breakdown of the various functions across all missions. All of the 23 M&E Platforms include, at minimum, evaluation or monitoring. A majority of these M&E Platforms included both. In addition to monitoring and evaluation functions, other functions that were frequently included in these mechanisms are:
- M&E data management, such as aggregating monitoring data and preparing it for reports and reviews;
- Conducting other analysis and assessments;
- Monitoring and evaluation capacity building for USAID staff, USAID implementers, or third parties; and
- Other functions, such as strategic communications, support for learning and adapting, and business process reengineering.

Further discussion of these functions is found in section four.

3 Current or recently completed M&E Platforms include those Platforms that were active anytime in 2012 or from January to May 2013.

Table 1: Recent and Planned M&E Platform Mechanisms

Current or Recently Completed Contracts:
1. Services under Program and Project Offices for Results Tracking (SUPPORT), Phase II
2. Monitoring and Evaluation Program
3. Monitoring and Evaluation Platform
4. Performance Management System
5. Mission Evaluation Mechanism
6. Evaluation Services for USAID/India
7. Performance Evaluation and Reporting for Results Management
8. Program Support Services Contract
9. Independent Monitoring and Evaluation Contract
10. Evaluation Services
11. Evaluation and Survey Services
12. Internal Monitoring and Evaluation Systems and Reporting
13. Monitoring and Evaluation Support Project
14. Monitoring and Evaluation Project
15. Monitoring and Evaluation Management Services II
16. Monitoring and Evaluation Tasks

In Planning as of May 2013:
17. Colombia* - Evaluation and Analysis for Learning (EVAL) Project
18. Egypt - Services to Improve Performance Management, Enhance Learning and Evaluation
19. Iraq*† - Advancing Performance Management
20. Jordan - Monitoring and Evaluation Support Project
21. Nepal - PMP Development, Monitoring and Evaluation Capacity Building Services, Evaluations, Assessments and Analyses
22. RDMA - Unnamed
23. Uganda*† - Monitoring, Evaluation and Learning Program

* Missions with a current or recently completed M&E Platform and in planning for a new M&E Platform.
† Missions with a previous M&E Platform completed prior to 2012.

Annex 3 provides summary information on mission satisfaction with their M&E Platform. As part of the interviews with mission informants, each respondent was asked a standard set of structured questions rating their current or past mechanisms on a 1-5 scale (where 1 = "Does not meet Mission needs" and 5 = "Meets Mission needs well") on the following topics:
- Ease of procurement,
- Integration with other M&E mechanisms,
- Ease of management,
- Cost- and time-effectiveness, and
- Quality of monitoring and of evaluation deliverables.

Respondents were also asked whether they would recommend their mechanism to other missions, again on a 1-5 scale (where 1 = would not recommend and 5 = would highly recommend). In those missions where multiple respondents were interviewed, scores were averaged across all respondents.

Overall, missions were satisfied with their M&E Platforms (scoring them between 3 and 4 on most measures). On the question of whether respondents would recommend their M&E Platform to other missions, half of the M&E Platforms (8 of 16) would be recommended to colleagues (scoring 4 or higher), while only two M&E Platforms would not be recommended (scoring lower than 3).
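The averaging and bucketing described above (averaged across a mission's respondents; recommended at 4 or higher, not recommended below 3) can be made concrete with a minimal sketch. The mission names and ratings below are hypothetical illustrations, not data from this report.

```python
# Minimal sketch of averaging 1-5 "would you recommend?" ratings across
# respondents and bucketing them the way the report does. Mission names
# and ratings are hypothetical, not data from the report.
from statistics import mean

ratings_by_mission = {
    "Mission A": [5, 4],      # two respondents interviewed
    "Mission B": [3],         # single respondent
    "Mission C": [2, 3, 2],   # three respondents
}

for mission, ratings in ratings_by_mission.items():
    score = mean(ratings)  # average across all respondents in the mission
    if score >= 4:
        bucket = "would recommend to colleagues"
    elif score < 3:
        bucket = "would not recommend"
    else:
        bucket = "neither bucket (between 3 and 4)"
    print(f"{mission}: {score:.1f} -> {bucket}")
```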

III. WHY M&E PLATFORMS?
The introduction of the USAID Evaluation Policy in January 2011, along with related changes to the USAID ADS on strategy, project design, and monitoring (2012), has increased mission workload and the need for technical expertise in the area of monitoring and evaluation. While these changes in monitoring and evaluation at USAID affect all missions, responses have differed from one mission to the next.

Missions have a variety of options for conducting the monitoring and evaluation of their projects and activities. In addition to the M&E Platforms described in this report, missions may choose to conduct some of these tasks with existing personnel or use implementing partners to collect monitoring data and evaluate their own projects. In some cases, though, such as for projects that are required to be evaluated, missions must use third-party evaluators. Even in these instances, missions could choose to contract for "one-off" evaluations either through their own mechanism or a Washington-based mechanism rather than develop a multi-year M&E Platform.

So why choose to contract for an M&E Platform? Mission respondents mentioned the following:

1. Speed: M&E Platforms offer the possibility of quickly fielding M&E tasks. In South Sudan, for instance, the mission respondent said that they could not use PPL's evaluation IDIQC because the situation changes quickly and they need to task contractors at a moment's notice.

2. Implementer continuity: Respondents noted that there seem to be special benefits to choosing one firm to do evaluations rather than having multiple contractors bidding on different evaluations. Several interviewees felt that one firm knowing the country, the mission, and the partners provides continuity and builds expertise. "It takes too much time to work with multiple contractors on aligning evaluation designs and reports to USAID's evaluation policy requirements," noted one respondent.

3. Limited staff resources: Some mission respondents noted that busy Program Office staff made it hard to do monitoring and evaluation themselves or to contract for each task separately. One Program Officer noted, "There was no way the Program Office could meet [the demands of] M&E along with all else on our plates ... The ADS is clear but unrealistic with mission staff having 15 other things to work on ... We took on the contract to expand leverage [especially as] there is a lack of time and staff to do M&E justice." The majority of interview respondents reported feeling overwhelmed in much the same way as this Program Officer.

4. Access to technical expertise: Some mission respondents noted that M&E Platforms enable access to technical expertise, for instance in the design phase of an evaluation, that is neither available within the mission nor accessible under one-off M&E tasks.

5. Multiple functions: Some mission respondents expressed enthusiasm about using the M&E Platform mechanisms to execute a multitude of services to improve the efficiency of programs implemented by the mission's Technical Offices and overall mission learning. These services could include not just monitoring and evaluation, but also assessments, management information and reporting, mapping, capacity building, and learning activities.

IV. DESIGNING AN M&E PLATFORM
As many respondents noted, the quality of M&E Platform design is critical. The M&E Platform SOW sets the content and tone for the entire contract, and deliverables are only as good as the SOW. Four questions to consider in designing an M&E Platform are discussed below.

A. Which M&E related functions should be included in an M&E Platform?
One of the most important choices to address in designing an M&E Platform concerns which functions to include. Although all of the M&E Platforms examined include, at minimum, monitoring and/or evaluation functions, most of the M&E Platforms (21 out of 23) include other functions as well. These functions are grouped into six broad categories, as shown in Annex 2, and fifteen sub-categories, as shown in Figure 1. Details about each of the categories are provided in Annex 4.

Each of these functions found in recent and planned M&E Platforms is discussed below along with examples and issues for consideration.

Figure 1: Frequency of Functions among 23 M&E Platform Mechanisms
[Bar chart. For each of the six functional categories (Evaluation, Monitoring, Data Management, Other Analysis, M&E Capacity Building, Other) and their sub-categories - Third Party Monitoring; Macro/Portfolio-level Monitoring; Data Management & Reporting; MIS Development; Geographic Information Systems; Analysis, Assessments and Planning; Environmental Compliance; M&E Training for Other IPs; M&E Training for USAID Staff; Local M&E Capacity Building; Strategic Communication; Business Process Reengineering; Learning (for adapting) - the bars show the total number of M&E Platform mechanisms (on a scale of 0 to 19) that include any function in that category or sub-category.]
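A frequency tally like the one Figure 1 displays can be derived from a simple mapping of each platform to the functions its SOW includes. The sketch below is hypothetical in every particular (platform names and function sets) and only illustrates the counting.

```python
# Minimal sketch of tallying how many platforms include each function,
# the kind of count a chart like Figure 1 displays. Platform names and
# function sets are hypothetical, not the report's data.
from collections import Counter

platform_functions = {
    "Platform 1": {"evaluation", "monitoring", "data management"},
    "Platform 2": {"evaluation", "M&E capacity building"},
    "Platform 3": {"evaluation", "monitoring", "other analysis"},
}

counts = Counter()
for functions in platform_functions.values():
    counts.update(functions)  # each platform counted once per function

for function, n in counts.most_common():
    print(f"{function}: included in {n} of {len(platform_functions)} platforms")
```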

1. Evaluation
Evaluation functions in M&E Platforms included: (1) conducting performance and impact evaluations, and (2) meta-evaluations, in which the M&E Platform contractor assesses the results or quality of evaluations performed by other implementing partners. Nineteen out of twenty-three M&E Platforms included evaluation functions.

Considerations:
Technical help in designing evaluations: A few mission and PPL respondents noted the benefit of including evaluations in M&E Platforms because they allow the Program Office to work with evaluators from the M&E Platform contractor to develop reasonable questions, budgets, and methods, which is especially important for complex evaluations, such as impact evaluations. This contrasts with the typical process of contracting individual evaluations, which requires the Program Office to specify evaluation questions, methods, and budgets upfront.

Caution! Some tensions exist with this approach regarding the appropriate level of contractor involvement in developing SOWs and the resulting budget. Consult your Contracting Officer to establish the parameters for collaboration between USAID and the M&E Platform partner on developing SOWs and refining evaluation questions and methodologies.

Meta-evaluations: Although few missions explicitly included meta-evaluations in their M&E Platform, one mission respondent noted the value of the meta-evaluation function of their M&E Platform. "It was [especially] useful for the mission's CDCS process as over 100 evaluations were analyzed, with [sometimes surprising results from the] synopsis of key recommendations that were common in almost all evaluations, which included issues on host government willingness, involvement and buy-in into projects, and sustainability of activities and projects."

2. Monitoring
Monitoring functions in M&E Platforms included: (1) third party monitoring - the planning, collection, verification, assessment, and/or review of performance indicator data of other implementers' projects, and (2) macro/portfolio level monitoring - the planning, collection, verification, assessment, and/or review of portfolio level indicators or indicators relevant to multiple projects and activities. Seventeen out of twenty-three M&E Platforms included monitoring functions.

Considerations:
Third Party Monitoring in Restricted Environments: Seven missions (Somalia, Colombia, DRC, Yemen, Afghanistan, Rwanda, and Iraq) reported that their contractors are tasked with conducting third party monitoring, including completing Data Quality Assessments (DQAs). While there are drawbacks to relying on third party monitoring, particularly for DQAs (see below), this practice makes sense in restricted environments and has been used well and innovatively in some cases (see Example: Somalia, below).

Example: Somalia. This non-presence country is piloting a third party monitoring and reporting tool to oversee its programs. "We use GIS imagery and on-the-ground local monitors who can interview the population and confirm the quality of the US investments."

Caution! Outsourcing DQAs: Although M&E Platforms have been used to assess the quality of data reported by implementing partners, mission respondents found that the DQAs conducted by M&E Platforms were not always sufficiently rigorous to catch the errors in data reported by implementing partners. In the most egregious example mentioned, one mission respondent reported "shockingly bad data quality assessments" conducted by the M&E Platform. As noted in ADS 203.3.11.3, "Missions should not hire an outside expert to assess the quality of their data." In unrestricted environments, missions should therefore use caution in requesting DQAs in an M&E Platform's SOW and institute means of quality control.

Example: Kenya. The Kenya Mission used third party monitoring for its education and youth programs. "We hired Kenyan college graduate, intern-level field monitors for capacity building and because [it is] cheaper ($100,000 a year) than alternatives. As they are all from the area, they knew who to speak with, where to go, and speak the language. They interviewed [participants], communities, and government staff. [The Mission] used EpiSurveyor to get real-time data. It was shared with AOR/CORs at aggregate response level, and then they could drill down if/when some results stood out. [This approach was good because] AORs/CORs don't have the time to monitor a representative number of sites and they're not [as] objective."

3. Data Management
Data management functions in M&E Platforms included: (1) data management and reporting - aggregating data from multiple partners and projects; coordinating the compilation and validation of data for the Operational Plan (OP)/Performance Plan and Report (PPR); producing customized information products from monitoring data, etc.; (2) Management Information System (MIS) development or modifications for indicator data; and (3) developing Geographic Information Systems (GIS). Fifteen out of twenty-three M&E Platforms include data management functions.

Considerations:
Integrating data management with monitoring support: Respondents from Yemen, Nepal, Rwanda, Kenya, and others stressed the need for centralized data collection and analysis. One interviewee said that having highly qualified technical help on PMP and indicator development is highly beneficial, as is having the contractor manage the data platform. In Uganda, for instance, the M&E Platform contractor gave assistance to implementing partners on data collection tools, ensured data collection methods were the same, rolled up the data collected, verified and cleaned it, ensured it was verified by CORs, and even did some analysis plots over time (e.g., actual vs. targeted); a minimal illustration of this kind of roll-up follows section 4 below.

Caution! There is currently a moratorium on developing new information systems for managing performance monitoring data as PPL and CIO move forward on the standardization process and roll-out of AidTracker Plus.

4. Other Analysis
Other types of analysis services in M&E Platforms included: (1) analysis of secondary data sources to inform project design, sectoral or cross-cutting assessments, and situational reports; and (2) environmental compliance. Fifteen out of twenty-three M&E Platforms include "other analysis."

In Uganda, for example, contractors did early analysis of monitoring data in preparation for a PPR/portfolio review, which led to new program (re)designs. The contractors also conducted special studies that led to Ministry of Health policy reforms.
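To make the Uganda-style roll-up concrete, the sketch below aggregates indicator results across implementing partners and compares actuals against targets. It is a minimal illustration only: the partner names, indicators, and figures are hypothetical, not data from this report, and a real M&E Platform would draw these records from partner reports or an MIS.

```python
# Minimal sketch of rolling up partner-reported indicator data and
# comparing actuals against targets. All names and numbers below are
# hypothetical illustrations, not data from the report.
from collections import defaultdict

# (partner, indicator, actual, target) records as reported by partners
records = [
    ("Partner A", "children vaccinated", 1200, 1500),
    ("Partner B", "children vaccinated", 900, 800),
    ("Partner A", "teachers trained", 40, 60),
]

actuals = defaultdict(int)
targets = defaultdict(int)
for partner, indicator, actual, target in records:
    actuals[indicator] += actual  # roll up across partners
    targets[indicator] += target

for indicator in sorted(actuals):
    pct = 100 * actuals[indicator] / targets[indicator]
    print(f"{indicator}: {actuals[indicator]} of {targets[indicator]} targeted ({pct:.0f}%)")
```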

5. M&E Capacity Building
M&E capacity building functions in M&E Platforms included training and guidance on monitoring and evaluation topics for: (1) implementing partners, (2) USAID staff, and (3) local evaluators and/or host country officials. Twelve out of twenty-three M&E Platforms include M&E capacity building functions.

Considerations:
Building on previous M&E training: The need for much more training of USAID staff and implementing partners was a recurring theme. While mission respondents noted that they have sent their staff to M&E training held either in Washington or in the region itself, additional training through an M&E Platform could also be beneficial. According to one respondent, "We had several people get M&E training [from Washington or regional courses] but this is separate from actually doing it back in the mission. It is essential; they needed practical support from experts as they applied the training." Another respondent focused on the need to involve implementing partners in local capacity building: "The culture of implementers hasn't changed, although USAID has just started to change ... Implementers were upset at the new USAID Evaluation Policy and resistant to change in 2011."

Example: Ethiopia. Ethiopia invested much of their early M&E Platform contract work in capacity building of mission staff and partners on M&E. The mission has already trained 25 mission staff in M&E (on DQAs, good SOWs in evaluations, streamlining the Results Framework, and doing the PMP) across different teams. However, the mission found this may not be enough. As one interviewee stated, "We don't do M&E rigorously because we are always running to implement, implement and implement."

Caution! Be sure your M&E Platform contractor understands current Program Cycle guidance before initiating capacity building efforts.

6. Other Functions
Other functions in M&E Platforms included: (1) strategic communications - producing publications and organizing public events with partners and other stakeholders; (2) learning (for adapting) - facilitating regular meetings for compiling lessons learned and sharing information relevant to USAID and implementing partner staff; and (3) business process reengineering - meeting with USAID staff to improve M&E efforts and/or developing a mission order. Nine out of twenty-three M&E Platforms included "other" functions.

Considerations:
Integrating learning tasks into a predominantly M&E Platform mechanism: As missions begin to increase their efforts to have an intentional focus on integrating learning, some have chosen to support these efforts through their M&E Platform contracts. The chief advantage of including learning in the M&E Platform is that it provides an opportunity to feed monitoring data and evaluation findings into broader learning processes that inform project design and implementation. An M&E Platform mechanism can support knowledge creation to fill knowledge gaps, develop a plan for filling the identified gaps through research and experiential knowledge sharing, facilitate collaboration and coordination among implementing partners and other collaborators who will have the greatest impact on the mission's results, and - reflecting on the implications of new learning - plan for adapting programs accordingly.

"We struggled most with the Mission staff's learning curve in learning to get to impact, such as cross-pollinating learning across sectors so that M&E isn't one more thing to do. People should feel 'I'm an expert in development and my job is learning and M&E too!'"
- Deputy Program Officer

Caution! There may be some disadvantages to including learning tasks in an M&E Platform, simply because M&E work is often associated strongly with "accountability." An accountability focus can inhibit honest appraisals of what's working and what is not, particularly if there isn't an explicit acknowledgement that reality will always diverge from even the best plan. It may be necessary for learning-focused efforts to be supported through separate resources (whether an external mechanism or dedicated USAID staff) to enable the candid reflection and sharing required for learning.

B. How many M&E related functions should be included in an M&E Platform?
The discussion above presents the wide array of functions possible under an M&E Platform. Each set of functions has its proponents. The decision whether to include a specific function in an M&E Platform should not be made in isolation, but considered in light of the entire range of services requested. As M&E Platforms evolve toward a more complex spectrum of services, integrating these functions into an M&E Platform provides both opportunities and challenges.

For some missions, the ability to contract multiple M&E related services in a single contract presents a promising opportunity to improve efficiency and/or encourage learning. Egypt's new M&E Platform is one example; it includes a host of services
