
MONITORING AND EVALUATION FRAMEWORK FOR CONTINUING PROFESSIONAL DEVELOPMENT (2012)

Contents

1. About this guide and its applicability
2. What does the Guide do?
3. The importance of M&E
4. Definitions
5. Types of Evaluations
6. Key M&E concepts
7. CPD Results Framework
7.1 What a CPD Results Framework is used for
7.2 Characteristics of a CPD Results Framework
8. Types of indicators, targets and means of verification
9. Methods of evaluation
10. Who should conduct the evaluation?
11. Guidelines for deciding on evaluation methods
12. Data Quality Assessment Framework
13. What should the M&E Plan include?
14. Suggested outline for an evaluation report
References

Monitoring and Evaluation Implementation Framework for Continuing Professional Development (CPD)

1. About this guide and its applicability

Monitoring and evaluation (M&E) can be effective tools to enhance the quality of project planning and management. Monitoring helps project managers to understand whether projects are progressing on schedule and to ensure that project inputs, activities, outputs and external factors are proceeding as planned. Evaluation helps project managers assess the extent to which projects have achieved the objectives set forth in the project documents.

This guide presents a step-by-step process for using the proposed M&E Plan for CPD. It is expected that it will be used in different ways by different programme interventions and at different stages of the strategy development and implementation process. However, it is very important to involve stakeholders and partners in each specific programme.

The guide puts emphasis on the Results Framework approach, which is meant to be simple and straightforward in design and therefore does not require specialised skills. It is a systematic approach to documenting the logic of a strategy and its subsequent management, monitoring and performance measurement, to ensure that the intended results have been achieved.

This guide has been written for all those people who have specific yet different M&E-related responsibilities and tasks within the scope of CPD in the education sector. This makes it possible for users to focus on the material that is relevant to their needs at a particular point in time. The user might want to copy parts of the guide on particular M&E functions and use them in specific CPD projects.

2. What does the Guide do?

2.1 The objectives of this guide are to provide the reader with:
- a basic understanding of the purposes, processes, norms, standards and guiding principles for planning, monitoring and evaluation within the CPD context;
- knowledge of the essential elements of the planning and monitoring processes in CPD, i.e. developing a robust results framework for projects and programmes, with clear indicators, baselines and targets, and setting up an effective monitoring system (see the sketch after this section);
- knowledge of the essential elements of the evaluation process in CPD: developing an evaluation plan; managing, designing and conducting quality evaluations; and using evaluation to develop intervention programmes.

2.2 To enhance the results-based culture within CPD and to improve the quality of planning, monitoring and evaluation of education projects and programmes.
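To make the idea of a results framework with clear indicators, baselines and targets concrete, the following minimal Python sketch shows one possible way to represent it. The class and field names, and all figures, are illustrative assumptions for this guide's readers, not part of the CPD framework itself.

from dataclasses import dataclass, field

@dataclass
class Indicator:
    # One measurable indicator with its baseline and target.
    name: str
    baseline: float   # reference value before the intervention
    target: float     # intended value by the target date
    target_date: str  # e.g. "2013-12"

@dataclass
class ResultsFramework:
    # An objective and the indicators used to track it.
    objective: str
    indicators: list = field(default_factory=list)

# Hypothetical CPD example; names and numbers are for illustration only.
framework = ResultsFramework(
    objective="Strengthen classroom practice of novice teachers",
    indicators=[
        Indicator("Novice teachers completing induction", baseline=0, target=120, target_date="2013-12"),
        Indicator("Mentoring sessions held per term", baseline=1, target=4, target_date="2013-06"),
    ],
)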

3. The importance of M&E

Monitoring and evaluation is important because:
- it provides the only consolidated source of information showcasing project progress;
- it allows educators to learn from each other's experiences, building on expertise and knowledge;
- it often generates (written) reports that contribute to transparency and accountability, and allows lessons to be shared more easily;
- it reveals mistakes and offers paths for learning and improvement;
- it provides a basis for questioning and testing assumptions;
- it provides a means for educators seeking to learn from each other's experiences and to incorporate them into policy and practice;
- it provides a way to assess the crucial link between implementers and beneficiaries on the ground and decision-makers;
- it provides a more robust basis for raising funds and influencing policy.

To ensure effective implementation and follow-up, ongoing evaluation must be built into the implementation, based on predetermined critical success criteria for each learning initiative. Evaluation should take place at different times: before a CPD intervention (known as diagnostic evaluation), during a CPD intervention (known as formative evaluation), at the conclusion of a learning programme (known as summative evaluation), and sometime after a learning programme (known as longitudinal evaluation). Frequent interim evaluations must be conducted in order to prevent stagnation and to encourage ongoing CPD programmes. The evaluation process should also include opportunities for revisiting the learning programme strategy in order to effect amendments and improvements (Meyer, 2002).

4. Definitions

Monitoring is the routine checking of information on progress, so as to confirm that progress is occurring against the defined direction. It commonly involves monthly to quarterly reporting on outputs, activities and use of resources (e.g. people, time, money and materials). It should be used to ensure that what has been planned is going forward as intended and within the resources allocated. A minimal sketch of such a routine check follows this section.

Evaluation is used to ensure that the direction chosen is correct, and that the right mix of strategies and resources was used to get there. It can typically be formative (helping to develop learning and understanding among stakeholders) or summative (i.e. indicating the degree of achievement). It typically focuses on outcomes and their relationship with outputs.
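As one illustration of monitoring as defined above (routine, e.g. quarterly, checking of outputs and activities against the plan), the short Python sketch below compares planned and actual output counts and flags any shortfall. All output names and figures are hypothetical.

def quarterly_check(planned, actual):
    # Compare each planned output with what was actually delivered.
    for output, plan in planned.items():
        done = actual.get(output, 0)
        status = "on track" if done >= plan else "behind by {}".format(plan - done)
        print("{}: planned {}, actual {} -> {}".format(output, plan, done, status))

# Hypothetical quarterly figures for a CPD programme.
planned = {"workshops held": 12, "teachers trained": 300, "manuals distributed": 500}
actual = {"workshops held": 9, "teachers trained": 310, "manuals distributed": 420}

quarterly_check(planned, actual)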

5. Types of Evaluations

Diagnostic evaluation
Timing: before delivery of the CPD programme.
Focus: the design of the programme; existing skills levels of learners as part of the CPD needs analysis.
Questions to ask:
- Are the training and learning facilitation methods appropriate to achieve the outcomes?
- Do the training methods coincide with the learners' preferences and learning styles?
- Has the learning programme been designed in the most efficient manner?

Formative evaluation
Timing: during the CPD intervention.
Focus: the quality of the delivery process; the adequacy of the learning material; the appropriateness of the delivery methods.
Questions to ask:
- Are the learners enjoying the delivery of the programme?
- Are the methods being used in the delivery of the programme effective in achieving the programme objectives and learning outcomes?
- What is the quality of the delivery of the learning programme?
- Are all the administrative arrangements running smoothly?

Summative evaluation
Timing: directly after the CPD intervention.
Focus: satisfaction of the learners with the learning programme; the achievement of the outcomes by the learners; the overall effectiveness of the learning programme.
Questions to ask:
- Have the learners achieved the learning outcomes? What are the learners' assessment results?
- Was the learning programme effectively delivered?
- Did the learning programme achieve its overall objectives?

Longitudinal evaluation
Timing: on the job, 3-12 months after completion of the CPD intervention.
Focus: transfer and application of learning in the workplace; support for new knowledge, skills and attitudes in the workplace; impact on individual performance in the workplace; impact on the performance of the education system.
Questions to ask:
- What could we have done differently?
- What needs to be changed?
- How would we improve the learning programme?
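To show how the timing column of this table could be operationalised, the Python sketch below derives an evaluation calendar for a single CPD intervention from its start and end dates. The two-week lead time and the 3-month offset for the longitudinal evaluation are illustrative choices within the windows described above; the function and variable names are assumptions, not part of the framework.

from datetime import date, timedelta

def evaluation_calendar(start, end):
    # Schedule the four evaluation types around one CPD intervention.
    return {
        "diagnostic (before delivery)": start - timedelta(weeks=2),
        "formative (during the intervention)": start + (end - start) / 2,
        "summative (directly after)": end,
        "longitudinal (on the job, 3-12 months later)": end + timedelta(days=90),
    }

# Hypothetical intervention running from mid-January to the end of June.
for kind, when in evaluation_calendar(date(2012, 1, 16), date(2012, 6, 29)).items():
    print(kind, "->", when)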

6. Key M&E concepts

Outcomes versus outputs, inputs, activities and impacts:

Inputs: the human, financial and other resources expended in undertaking the activities.
Activities: the things that need to be done to achieve outputs.
Outputs: the major results needed to achieve the outcomes.
Outcomes: the benefits, intended or unintended. These can be relatively short-term (e.g. during a project's life, commonly referred to as the project purpose or objectives) or long-term (commonly referred to as the goal or long-term objectives).
Impacts: the results of achieving specific outcomes, such as improved learner performance.

7. CPD Results Framework

The Continuing Professional Development Results Framework (CPDRF) is applied to monitor projects/programmes during implementation (with a view to taking corrective action) or to assess the results, or even the design, of completed projects. The CPDRF addresses several long-standing criticisms of capacity development work, including the lack of clear definitions, coherent conceptual frameworks and effective monitoring of results. It also promotes a common, systematic approach to capacity development by clarifying objectives, assessing prevailing capacity factors, identifying appropriate agents of change and change processes, and guiding the design of effective learning activities. The framework addresses a gap often found between broad overall objectives and specific learning activities. It requires a defined set of variables for any developmental goal in a given context, and an explicit model of the change process that learning is expected to facilitate.

A CPDRF has two critical features, namely:

Big picture perspective: a Results Framework incorporates the contributions of the stakeholders necessary to achieve relevant goals and objectives.

Cause and effect logic: a Results Framework outlines the development hypothesis implicit in the strategy and the cause-and-effect linkages between the goal, strategic objectives and specific programme outcomes.

7.1 What a CPD Results Framework is used for

The Results Framework is used for planning, management/monitoring/review, and communication.

Planning: a CPDRF is used to identify appropriate objectives by ensuring that important questions are asked at an early stage. It also provides a framework within which to work collaboratively with stakeholders in order to build shared ownership of objectives and approaches.

Management/monitoring/review: a CPDRF can fill the role of a performance framework for a programme strategy. It provides a programme-level framework to monitor progress towards achieving results and, where necessary, to adjust programmes accordingly. In addition, the framework allows for annual reviews which are straightforward and rigorous in structure, through which a strategy's performance can be tested.

Communication: a CPD Results Framework can present a strategy on one page, giving the reader an immediate idea of what a programme is aiming to achieve.

7.2 Characteristics of a CPD Results Framework

Research and information: a CPD Results Framework should be based on concrete information and analysis which is well grounded in reality.

An understanding of 'cause and effect' logic: cause-and-effect relationships are based on a hypothesis, not on hard evidence. Performance data and a good strategy will provide the flexibility to learn lessons and build in modifications as the strategy proceeds; as a result, the strength of the hypothesis will be borne out.

An understanding of attribution: the degree of attribution progresses from strategic objective through intermediate objective to programme outcome. At programme-outcome level, the attribution emphasises the desired programme outcomes.

Below is an example of the adaptation of the M&E Framework in the Namibia Novice Teachers Induction Programme.

Objective 1: Provide the necessary support to novice teachers.
Performance indicators:
1. Orientation:
1.1 number of meetings held, number of internal school tours held;
1.2 number of meetings held, number of internal community tours held.
2. Mentoring:
2.1 types of novice needs identified;
2.2 goals identified;
2.3 Mentoring Year Planner developed;
2.4 types of administrative support identified;
2.5 portfolio available:
2.5.1 number of co-planning sessions;
2.5.2 types of feedback from observation;
2.5.3 number of face-to-face meetings;
2.5.4 number of work sessions on content-specific issues;
2.5.5 number of classroom observations;
2.5.6 types and number of professional development sessions attended.
Means of verification: survey, document analysis, classroom observations/interviews with novice teachers.
Critical assumptions: 100% of the target group attain appropriate competencies and skills.

Objective 2: Increase the retention of promising novice teachers.
Performance indicators: number of novice teachers in service; number of novice teachers who have exited teaching at intervals (2 or 5 years later).
Means of verification: document analysis, EMIS.
Critical assumptions: 100% of novice teachers retained.

8. Types of indicators, targets and means of verification

Indicators are designed to measure change over time, by pointing to the direction of change (positive or negative) or whether the situation is improving or worsening. Indicators are usually numeric. They may contain qualitative data, which is usually quantified. When numeric data are based on qualitative values, these should be applied to generate meaningful information. It is therefore important to have qualitative indicators, which can boost stakeholders' participation, given that their opinion is required in order to produce the indicators.

Indicators can have different uses depending on the type of information that needs to be collected. It is therefore important to distinguish between the different types of indicators according to their function and the type of information they refer to:

Direct indicators are a close match to an objective. For example, for programme development, an indicator could be the number of programmes created and sustained for at least one year as a direct result of a programme intervention.

Indirect indicators, sometimes referred to as proxy indicators, are used where an indicator has to represent the objective. For example, for skills developed, a proxy indicator could be the number of internships agreed; this is not a complete indicator for skills developed (there may be other sources), but could represent at least part of the objective.

The target provides the actual number and the timescale. The baseline provides the reference point in terms of quantity, quality and time, against which the target can be measured.
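As a minimal illustration of how a baseline and a target are used together, the Python sketch below expresses current performance as a percentage of the distance from baseline to target. The indicator name and all values are hypothetical.

def progress_towards_target(baseline, current, target):
    # Share of the baseline-to-target distance covered so far, as a percentage.
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return 100.0 * (current - baseline) / (target - baseline)

# Hypothetical indicator: novice teachers retained after two years.
print(progress_towards_target(baseline=40, current=70, target=100))  # -> 50.0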

Indicators may also be classified by the stage of the results chain that they measure:

Input indicators: measure the means by which the project is implemented.
Process indicators: measure the delivery activities of the resources devoted to a programme or project; they monitor achievement during implementation in order to track progress towards the intended results.
Output indicators: measure the extent to which the project delivers the intended outputs and identify intermediate results.
Impact indicators: measure the extent to which the project has the intended effects, and relate directly to the long-term results of the project.

9. Methods of evaluation

This section focuses on some of the many methods of evaluation: how each works, when it should be used, how to use it, and how methods can be matched to the model of evaluation used. The techniques commonly used by evaluators are:
- open-ended comments or reactions;
- objective questions or surveys;
- task performance measures such as simulations and role plays;
- multiple-choice or similar tests;
- participant self-assessment.

It must be stressed that no method is value-free or theory-free. Instead, the choice of method will be determined by the model that the evaluator uses. At all times, the evaluator must question applications and whether the method is most useful for evaluating training as an activity or whether it shows synergy with the CPD Results Framework.

10. Who should conduct the evaluation?

Members of the CPD consortium are mandated to conduct CPD evaluations. These are selected members of the Faculty of Education, UNAM, and members from the Directorates of the Ministry of Education: NIED and PQA. In addition, members of the Regional Continuing Professional Development Coordinating Committee may also conduct evaluations at regional and national levels. However, the information gathered during the evaluations should be forwarded to the CPD Unit for record purposes.

11. Guidelines for deciding on evaluation methods

The guidelines below match each goal of evaluation to suitable methods, grouped by when (or at what level) the evaluation occurs.

During the training event:
- Goal: judgement about the quality of the learner's experience during learning. Methods: sessional reactionnaires; event reactionnaires; group discussion; individual comment.
- Goal: judgement of learning. Methods: written tests; behavioural observation; repertory grid; practical tests; video/audio recording; computer-based assessment.
- Goal: measures of change during training. Methods: pre- and post-tests; behavioural observation; practical tests; repertory grid.
- Goal: assessment of terminal competence. Methods: tests; behavioural observation.

In the workplace:
- Goal: did training meet needs? Methods: evaluation interviews.
- Goal: application of learning in the workplace. Methods: action planning; behavioural observation; critical incident analysis; evaluation questionnaires; participant observation; pre- and post-sampling of work.

Organisational:
- Goal: changes in organisational effectiveness. Methods: analyses of organisational measures such as output quality or quantity, sales volume, wastage, expressed customer satisfaction, and financial measures such as cost and return on investment.
- Goal: cost-effectiveness of training. Methods: costing; cost/benefit analysis; cost-effectiveness analysis.
- Goal: congruence of training and organisational vision. Methods: interviews; content analysis of formal and operative policies.

Social or cultural:
- Goal: contribution of training to national goals and objectives. Methods: cost-benefit analysis; values analysis; surveys.

Adapted from Meyer (2002: 334).

A minimal sketch of the pre- and post-test method listed above appears after these guidelines.
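The pre- and post-test method listed under "measures of change during training" can be summarised numerically. One common summary, sketched below in Python, is the average gain between each learner's pre-test and post-test score; the scores shown are invented for illustration.

def average_gain(pre_scores, post_scores):
    # Mean difference between paired post-test and pre-test scores.
    if len(pre_scores) != len(post_scores):
        raise ValueError("each learner needs one pre and one post score")
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Hypothetical test scores (out of 100) for five learners.
pre = [45, 60, 52, 38, 70]
post = [62, 71, 66, 55, 78]
print(average_gain(pre, post))  # -> 13.4 marks of average improvement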

12. Data Quality Assessment Framework

This guide borrows the Data Quality Assessment Framework (DQAF) to inform the quality of data collection, data processing, analysis and interpretation, and dissemination of education statistics at the national and regional levels. In addition, qualitative approaches may be adopted to ensure data quality. The DQAF examines these categories through six dimensions and twenty-one sub-dimensions, as set out below.

Data Quality Assessment Framework: dimensions and sub-dimensions

0. Pre-requisites: legal and institutional environment; resources; quality management.
1. Integrity: professionalism; transparency; ethical norms.
2. Methodological soundness: concepts and definitions; scope; classifications; basis for recording.
3. Accuracy and reliability: source data; statistical techniques; assessment and validation of source data, intermediate results and statistical outputs; revision studies.
4. Serviceability: relevance; timely and regular statistics; consistent statistics; revision policy and practice.
5. Accessibility: accessible data; metadata; assistance to users.
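To connect the DQAF's accuracy and serviceability dimensions to day-to-day practice, the Python sketch below runs two elementary quality checks on a batch of education records: completeness of required fields and timeliness of submission. The field names and the 90-day rule are hypothetical assumptions for illustration, not DQAF requirements.

REQUIRED_FIELDS = ("region", "school", "teachers_trained", "report_date")

def completeness(records):
    # Share of records containing every required field, as a percentage.
    ok = sum(1 for r in records if all(r.get(f) not in (None, "") for f in REQUIRED_FIELDS))
    return 100.0 * ok / len(records)

def late_reports(records, days_allowed=90):
    # Records whose reporting lag exceeds the allowed number of days.
    return [r for r in records if r.get("days_late", 0) > days_allowed]

# Hypothetical records from two regions.
records = [
    {"region": "Khomas", "school": "A", "teachers_trained": 14, "report_date": "2012-04-02", "days_late": 10},
    {"region": "Erongo", "school": "B", "teachers_trained": None, "report_date": "2012-07-30", "days_late": 120},
]
print(completeness(records))   # -> 50.0
print(late_reports(records))   # -> the Erongo record, 120 days late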

