

EPP Quality Assurance System Plan
Submitted to the Kentucky Education Professional Standards Board
March 2017 (Updated 5/11/2021)

WKU EPP QASP Page 1

INTRODUCTION

As can be seen at www.wku.edu/cebs/peu/, specifically "Evidence of Teacher Quality – Reports," WKU as an educator preparation provider (EPP) has a history of collecting, organizing, analyzing, reporting, and reflecting on candidate progress data at both the EPP and program level. This work has been based on our belief that highly effective EPPs develop and maintain a quality assurance system that makes credible performance data on the progress and achievement of each candidate available for feedback and reporting to the candidate, faculty, and program. Such a system allows EPPs to monitor and report overall candidate progress toward standards. To that end, almost two decades ago, the WKU EPP developed the WKU Electronic Portfolio and Accountability Systems (E-PASS), in which key EPP-wide and program-level assessment data are electronically collected, stored, analyzed, and reported. The opening screen of the system can be viewed at http://edtech2.wku.edu/accountability/.

Although the E-PASS system will continue to be the central location of WKU EPP data, with the transition from the National Council for Accreditation of Teacher Education (NCATE) to the Council for the Accreditation of Educator Preparation (CAEP), WKU's challenges have included the following:

- Moving from a constellation of course-embedded "critical performances" of varying levels of quality, but that assessed all Kentucky Teacher Standards at each major transition point, to identifying or developing a few "key" and "defensible" (in terms of validity and reliability) assessments.
- Moving from an EPP-wide focus on "helping candidates reach proficiency" to developing "aspirational" assessments that reveal areas for growth in candidates and programs.
- Re-imagining the assessment results reporting and reflection process at the EPP-wide and program level with fewer, key assessments, and shifting from ensuring "everything looks good overall" to digging deeper into data for continuous improvement.

This planning document describes WKU's current progress and continuing journey toward overcoming these challenges. The plan outline follows the language of "CAEP Standard 5 – Provider Quality Assurance and Continuous Improvement."

QUALITY AND STRATEGIC EVALUATION

5.1.1. The EPP quality assurance system is comprised of multiple measures that can monitor:

A. Candidate Progress

For initial preparation programs, the WKU EPP has identified ten key assessments, as well as other state-mandated criteria, to monitor candidate progress (see Table 1). Table 2 indicates how these and other data are reviewed at various transition points to make decisions about candidate progress and program quality. The alignment of these key assessments to each initial preparation program is provided in Table 3.

v.12032017

Table 1. Key Assessments – Initial Preparation

#  | KEY ASSESSMENT AREA                                        | NAME                                                                   | KTS    | InTASC
1  | Content Assessment                                         | Praxis II                                                              | (1)*   | (4,5)
2  | Other Content Assessment                                   | Major GPA                                                              | (1)    | (4)
3  | Assessment of Professional Capabilities                    | Praxis PLT                                                             | (2-10) | (1-3,6-10)
4  | Clinical Experiences Measure of Teaching Proficiency       | Student Teacher Evaluation                                             | 1-10   | 1-10
5  | Measure of Assessment Proficiencies                        | A: Learning Goals & Pre/Post Assessment; B: Analysis of Student Learning | 1-3,5-7 | 1-10
6  | Ability to Diagnose and Prescribe for Personalized Student Learning | Design for Instruction                                        | 1,2,5,6 | 1,4-10
7  | Application of Content Knowledge and Pedagogical Skills    | Teacher Work Sample                                                    | 1-3,5-7,9 | 1-10
8  | Assessment of Literacy Outcomes                            | Operational Stance Concerning Content-Area and Discipline-Specific Literacies | — | —
9  | Dispositions                                               | Dispositions Form                                                      | NA     | NA
10 | KTS Exit Survey                                            | KTS Exit Survey                                                        | 1-10   | 1-10

Table 2. EPP-Wide Continuous Assessment Matrix – Initial Preparation

WKU EPP-WIDE CONTINUOUS ASSESSMENT MATRIX – INITIAL PREPARATION

[Matrix layout not recoverable from this transcription. Rows are the standards and values to which evidence is aligned: KTS 1 Content Knowledge; KTS 2 Designs/Plans; KTS 3 Learning Climate; KTS 4 Implements/Manages; KTS 5 Assessment/Evaluation; KTS 6 Technology; KTS 7 Reflection; KTS 8 Collaboration; KTS 9 Professional Development; KTS 10 Leadership; Dispositions; Field/Clinical Experiences (KFETS); and Diversity (KFETS). Columns are the five components: Component 1: Admission (various data required by the state for admission into EPPs, state-approved certification exams, faculty recommendations); Component 2: Mid-Level Key Assessments; Component 3: Early Field/Clinical Experiences, Dispositions, and KFETS; Component 4: Final Key Assessments (Student Teacher Evaluation, Teacher Work Sample, Dispositions); and Component 5: Exit and Follow-Up Data (Exit Survey, Praxis II, EPSB Survey, KTIP Data). Cells reference instrument or rubric/survey items (e.g., ST EVAL 1a-d, LGA, DI, ASL, CF, Disp a-l) keyed to each standard.]

Impacts P-12 Learning: LGA 1-9, ASL 1-4, R 1-2

DATA MAINTAINED BY: OTS†, Faculty, Faculty/OTS, Ed Tech
DATA REPORTING CYCLE / DATA REVIEWED BY: Programs/PEC
TRANSITION POINTS: 1: Program Admission; 2: Admission to Final Clinical Experience; 3: Program Exit; 4: Program Impact (CAEP 4)
*Cells reflect instruments or rubric/survey items keyed to CF Standards/Values; †OTS = Office of Teacher Services; PEC = Professional Education Council

Table 3. EPP-Wide Initial Preparation Key Assessments – Location Within Programs

[Per-program grid not recoverable from this transcription. Columns are the initial programs (ELED, MGE, SEC, SMED, ART, MLANG 5-12, MUS, PE, AGED, BME, FCS, SPED, IECE); rows locate each key assessment: Praxis II – Praxis Report; Major GPA – Prior to Student Teaching; Praxis PLT – Praxis Report; Student Teacher Evaluation – final clinical courses (e.g., EDU 490, ELED 490, MGE 490, SEC 490, SMED 470, ART 432, MUS 415/416, PETE 416, AGED 479, FACS 481, SPED 425/490, IECE 422/490); A: Learning Goals & Pre/Post Assessment and B: Analysis of Student Learning – methods courses (e.g., ELED 465, MGE 475/481, SMED 320, SEC 475/481, ART 413, MLNG 474, MUS 412, PETE 315, AGED 470, FACS 381, SPED 350, IECE 322); Design for Instruction – e.g., ELED 465, MGE 481, SEC 351; Teacher Work Sample – EDU 489 or SMED 489 in all programs; Content-Area and Discipline-Specific Literacies – LTCY 310, LTCY 420, or LTCY 421 by program; Dispositions Form* – early blocks and field-experience courses (e.g., ELED 345, Block I/II, SMED 102, SEC 350/351, FACS 282, IECE 321) as well as final clinical courses; KTS Exit Survey – EDU 489 or SMED 489 in all programs.]

*At the Admissions stage, the WKU EPP collects Dispositions observed early in programs (Level 1) as part of the Faculty Recommendation process. Level 1 dispositions are Values Learning (attendance, class participation, and class preparation) and Values Personal Integrity (emotional control and ethical behavior). The courses listed above are where both Level 1 and Level 2 Dispositions are collected, typically as students engage in field experiences. Level 2 dispositions are Values Diversity, Values Collaboration, and Values Professionalism (respect for school rules, policies, and norms; commitment to self-reflection and growth; professional development and involvement; and professional responsibility).

Table 3. EPP-Wide Initial Preparation Key Assessments – Location Within Programs – Continued (MAT)

NAME                                                                    | GSKYTeach | IECE     | MGE/SEC | SPED
Praxis II                                                               | Praxis Report (all programs)
Major GPA                                                               | Prior to Student Teaching (all programs)
Praxis PLT                                                              | Praxis Report (all programs)
Student Teacher Evaluation                                              | SMED 589  | IECE 524 | EDU 589 | SPED 590
A: Learning Goals & Pre/Post Assessment; B: Analysis of Student Learning | SMED 510 | SPED 523 | EDU 570 | SPED 530
Design for Instruction                                                  | SMED 520  | SPED 523 | EDU 522 | SPED 533
Teacher Work Sample                                                     | SMED 589  | IECE 524 | EDU 589 | SPED 590
Content-Area and Discipline-Specific Literacies                         | SMED 530  | SLP 517  | LTCY 510 | SPED 531

Dispositions Form*                                                      | SMED 520, SMED 589 | IECE 520, IECE 523, IECE 524 | EDU 520, EDU 522, EDU 589 | SPED 531, SPED 590
KTS Exit Survey                                                         | SMED 589  | IECE 524 | EDU 589 | SPED 590

B. Completer Achievements

Related to CAEP Standard 4, Table 1, Transition Point 4: Program Impact (CAEP 4) outlines continued WKU efforts to collect available state-level data to measure the overall preparation of our graduates, as well as their initial impact on P-12 student learning. Also see the information under section 5.4 below.

C. Provider Operational Effectiveness

See the "Continuous Improvement" section of this document for information regarding how key assessment and other data will be gathered and analyzed for operational effectiveness.

D. Advanced Education Programs

While the foregoing describes the quality assurance process for initial educator preparation programs, WKU's advanced educator credentialing programs also follow similar processes. Key assessments, aligned to all relevant standards, and how those assessment data are used for continuing program improvement, are described in program review documents (PRDs) posted on WKU's Advanced Program Review webpage: https://www.wku.edu/cebs/peu/advanced program review/

5.1.2. Evidence demonstrates that the provider satisfies all CAEP standards.

To demonstrate that it satisfies all CAEP standards, the WKU EPP searched out and reviewed all potential assessments and other artifacts related to educator preparation. Table 4 represents first efforts to identify the best sources of evidence by standard at the initial preparation level. As advanced programs prepare for CAEP, each will develop a similar table.

5.2. The provider's quality assurance system relies on relevant, verifiable, representative, cumulative and actionable measures, and produces empirical evidence that interpretations of data are valid and consistent.

WKU uses consistent and well-defined procedures in the development, implementation, and interpretation of the assessments used to provide evidence of candidate performance and program quality.
Appendix A: WKU Quality Assurance Diagram depicts the discrete steps outlined in the narrative below.

A. EPP Steps to Establishing Validity

WKU believes validity is a single, unitary concept rather than several separate types of validity based on use and situation. Validity is a characteristic of the assessment scores and the meanings and inferences developed from these scores, rather than an inherent characteristic of the instrument. The process WKU uses will build our case for validity from more than one category of evidence, including content, construct, concurrent, and predictive evidence. Inferences made from EPP assessments are made stronger by the validity process and provide a higher level of confidence when determining the meaning of the data. The validation process will be "an integrated [on-going] evaluative judgment of the degree to which empirical evidence and theoretical rationale support the adequacy and appropriateness of inferences and actions based on" the assessment outcomes (Messick, 1989, p. 13).

1. Research/Theoretical Base

The development/revalidation of any assessment will include the evaluation of current research and theoretical bases available on the topic. A short summary of previous research in the assessment area and rationale for further study will be developed.

Table 4. CAEP Evidence Alignment Matrix – Initial Preparation Programs

*This table will continue to be updated as WKU develops the CAEP SSR and modifies annual reporting procedures.


2. Development, Piloting, and Refinement

The development/revalidation of the assessment will include university faculty, clinical faculty, and other key P-12 partners. Appropriate development strategies may include surveys, focus groups, and expert review. Documentation of this step will include the refinements made during the development process, piloting of the instrument, and plans for full implementation.

Other items that will be included in the development process are (detailed in later steps):

- the administration and purpose of the assessment
- the point or points of administration
- the use in candidate monitoring or decisions on progression
- scoring items tagged to CAEP, InTASC, and KTS standards
- specific instructions for students
- the complete scoring rubric, including the criteria for success (what is "good enough")

3. Assessment Use and Training

The description of assessment use will include the groups who use the assessment (e.g., all initial preparation programs, program areas, licensure areas, etc.) and candidate groups. Specific details will describe the scorers' training process (initial training or re-calibration) and training strategies (videos, Blackboard course, sample assessments, etc.).

4. Integration into Curriculum

The description of integration into the curriculum will include the specific point or points when the assessment is administered (beginning, middle, end, etc.), the number of implementations (single or multiple), and the assessment scorers. This may include specific courses or candidate progress times (admission, clinical experience, etc.). Tables 1 and 2 illustrate how assessments and other key data are managed within the program and curriculum.

5. Type of Validity Evidence

Assessments developed by WKU will provide at least content-related evidence of validity; efforts will be made to also include either concurrent or predictive evidence.
The description of any assessment development will include the type of validity evidence under investigation or established, and the steps taken during the process.

Content-related or Construct-related Evidence of Validity

Content/construct-related evidence of validity will be explored using content experts, including university faculty, university supervisors, and P-12 teachers and administrators. These experts will be given the evaluation instruments and rubrics and will be asked to rate each item of the instruments using various criteria, as appropriate, such as the frequency of the teaching behaviors in actual job performance, the criticality (or importance) of those behaviors, the authenticity (or realism) of the tasks to actual classroom practice, and/or the degree to which the tasks are representative of the targeted standards (see Crocker, 1997; Denner, Norman, Salzman, Pankratz, & Evans, 2004). Rubrics will be evaluated for percentage of exact agreement and adjacent agreement for each rubric item. A content validity ratio (CVR) will be calculated using the following formula: CVR = (E − N/2)/(N/2), where N stands for the total number of experts and E stands for the number who rated the object as meeting the criterion (frequency, criticality, etc.) of interest (Chepko, 2016).

Concurrent-related Evidence of Validity

Concurrent validity refers to the relationship or correlation of scores between two or more assessments given during the same time (Slavin, 2007). As WKU gathers evidence related to key assessments, concurrent validity would be established by looking to other data running parallel to each assessment. For example, analysis of Key Assessment 5a (Learning Goals & Pre/Post Assessment) and 5b (Analysis of Student Learning) may be explored to establish the degree of relationship between the two assessments.

Predictive-related Evidence of Validity

Predictive validity is like concurrent validity but differs in that early key assessment data are analyzed regarding their relationship to a key assessment that occurs at a future time.
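The content validity ratio described under content/construct-related evidence above is simple enough to compute directly. The following is an illustrative sketch only; the function name and the 12-rater numbers are hypothetical, not actual WKU expert-panel data:

```python
def content_validity_ratio(n_experts: int, n_agree: int) -> float:
    """Content validity ratio, CVR = (E - N/2) / (N/2), where N is the
    total number of expert raters and E is the number who rated the
    item as meeting the criterion (frequency, criticality, etc.)."""
    if n_experts <= 0 or not 0 <= n_agree <= n_experts:
        raise ValueError("need n_experts > 0 and 0 <= n_agree <= n_experts")
    half = n_experts / 2
    return (n_agree - half) / half

# Hypothetical example: 9 of 12 experts judge a rubric item "critical".
# CVR = (9 - 6) / 6 = 0.5
print(content_validity_ratio(12, 9))
```

CVR ranges from −1 (no expert endorses the item) through 0 (exactly half do) to +1 (unanimous endorsement), so items with low or negative values are candidates for revision or removal.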
For example, analysis of Key Assessment 5a (Learning Goals & Pre/Post Assessment) and 5b (Analysis of Student Learning) may be explored to establish the degree of relationship with, and ability to predict, performance on Key Assessment 7 (Teacher Work Sample).

6. Results Analysis and Interpretation

See the "Continuous Improvement" section of this document for information regarding how key assessment and other data will be gathered and analyzed for EPP and program improvement.

B. EPP Steps to Establishing Reliability

1. Types of Reliability Evidence

Reliability refers to the ability of an assessment to measure candidate characteristics or knowledge consistently. There are many methods used to compute the reliability of an assessment:

- Internal Consistency – the degree to which assessment items correlate to one another.
- Test-retest – an estimate of reliability computed by correlating scores of the same group administered at different times.
- Parallel Forms – an estimate of reliability computed by correlating scores of the same group administered through different forms of the assessment (both designed to measure the same constructs).
- Inter-rater – the degree to which two or more raters obtain the same results when using the same instrument/criteria for evaluation.

Inter-rater reliability is the primary method WKU will use to measure the reliability of its assessments, as it addresses the consistency of the assessment implementation methods.

2. Scorer Training

Scoring assessments requires professional judgment and will be carried out by those considered qualified to make those judgments. Multiple raters help achieve the sound judgment necessary when reviewing assessments that may be considered "high stakes." Raters will include representatives from different groups, who may be course instructors, university supervisors, cooperating teachers, school administrators, or faculty members from other colleges or content areas.

Scorer training will include a review of the assessment and a general set of scorer guidelines. Anti-bias training will be included as part of this process. Raters will be given a complete explanation of the performance expectations, standards, directions, and prompts given to the candidates. As they become available, benchmark performances that represent different proficiency levels will be given to raters as training and calibration tools. Raters will score one or more performances to help identify any scoring difficulties or variances in judgment.
Individual scores can then be compared to the benchmark scores (Denner et al., 2004).

Scorer training will be documented, and any data analysis done during the process will be included as evidence of establishing/re-establishing reliability. Training for existing assessments will occur at least once a year, typically in August. Other training opportunities may need to occur at other times based on need (new faculty, adjuncts, etc.).

3. Multiple Scoring

New assessments will be evaluated for inter-rater reliability after the initial pilot of the instrument. At the end of the pilot, qualified raters will conduct a scoring session, which will establish the baseline for rater agreement. Depending on the size of the pilot, this could be done for all items or may be broken up into smaller scoring groups. At least two raters will rate each group and record scores for all indicator items. These data will be turned in for analysis.

Confirmation of inter-rater reliability will be conducted each year for all continuing key assessments. There will be an established time when the qualified raters can be brought together to evaluate the current semester/year data. A representative sampling of student work will be used for this verification. Each student's work will already have an existing instructor score, which will not be revealed to the additional scorers. Each sample of work will then be scored by different raters and the scores recorded. Data analysis will produce a current inter-rater score that can be compared to previous scoring efforts.

4. Reliability Coefficient

Although CAEP does not require EPPs to produce a reliability coefficient, WKU will be able to provide this information based on the original student score and the scores determined in the multiple scoring sessions. The percentage of agreement will be computed for each pair of ratings by dividing the number of exact rater agreements by the number of ratings, which is based on a similar process used in the EPSB KTIP research (Hibpshman, 2017).
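The percentage-of-agreement computation described above can be sketched as follows. The function, the optional adjacent-agreement tolerance, and the sample scores are illustrative assumptions, not actual WKU rating data:

```python
def percent_agreement(scores_a, scores_b, tolerance=0):
    """Share of paired ratings on which two raters agree.
    tolerance=0 counts exact agreement only; tolerance=1 also counts
    adjacent agreement (ratings one rubric level apart)."""
    if len(scores_a) != len(scores_b) or not scores_a:
        raise ValueError("need two equal-length, non-empty rating lists")
    hits = sum(1 for a, b in zip(scores_a, scores_b) if abs(a - b) <= tolerance)
    return hits / len(scores_a)

# Hypothetical ratings on a 1-4 rubric for six work samples.
instructor = [3, 2, 4, 3, 1, 2]   # original instructor scores
verifier   = [3, 2, 3, 3, 2, 2]   # blind second-rater scores

print(percent_agreement(instructor, verifier))               # exact: 4/6
print(percent_agreement(instructor, verifier, tolerance=1))  # adjacent: 6/6
```

Running both exact and adjacent agreement mirrors the rubric-evaluation language in the validity section above, and the resulting percentages can be compared year over year against prior scoring sessions.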

CONTINUOUS IMPROVEMENT

5.3. The provider regularly and systematically assesses performance against its goals and relevant standards, tracks results over time, tests innovations and the effects of selection criteria on subsequent progress and completion, and uses results to improve program elements and processes.

A. Assesses Performance Against Goals and Relevant Standards

The WKU EPP continues to believe that highly effective education preparation programs develop and maintain an assessment system that makes credible performance data on the progress and achievement of each candidate available for feedback and reporting to the candidate, faculty, and program. Such a system allows us to monitor and report overall candidate progress toward standards. Key assessment data, including dispositions, teacher work samples, and student-teaching evaluations, as well as fieldwork, survey results, and program impact, are reported annually to the EPP and programs via an EPP-wide assessment report developed by the College of Education and Behavioral Sciences (CEBS) Office of the Dean and presented to the Professional Education Council (PEC).
This report typically includes the following types of information (see Table 1 for reference):

CAEP 3.1: Admission Data
- Number, percentage, and diversity of educator preparation candidates approved by the PEC for admission, by program

CAEP 3.2: Admission Data
- Admission test score averages and average GPA of educator preparation candidates, by program

CAEP 3.3: Non-academic Dispositions Data
- Disposition average scores prior to student teaching and during student teaching, by program

CAEP 3.4: Candidate Progression/Monitoring

Mid-Level Key Assessment Data
- Percentage of candidates scoring at each level of proficiency on all key assessments, at the indicator level and by appropriate program standards
- Identification of candidates failing to make progress

Final Key Assessment Data
- Teacher Work Sample scores by program, by components, by indicators, and by appropriate program standards
- Student Teaching Evaluation data by program, by components, by indicators, and by appropriate program standards

Exit and Follow-Up Data
- WKU Exit Survey results

CAEP 3.5: Candidate standard for content knowledge
- Major GPA prior to student teaching, by program and for education vs. non-education students
- Praxis results

CAEP 5.4: Measures of completer impact
- Teacher Preparation Program Impact Report
- KTIP Data

- Education Professional Standards Board (EPSB) New Teacher Survey results

In section 1 of the report, results are reported by data collection point. In section 2, data are summarized based on what they reveal about candidate proficiencies on Kentucky Teacher and InTASC standards, as well as on other important measures such as dispositions and Praxis tests. Section 3 summarizes current and planned efforts to report and disseminate these results. Section 4 summarizes key decisions made or under consideration based on these results.

This report, as well as other data deemed important to the unit and programs, is initially disseminated through the PEC. The PEC, consisting of faculty representatives from all professional education preparation programs, meets monthly to admit teacher candidates into the professional education program, to approve education-related program changes, to discuss state and national education trends, to recommend changes to the functioning of the unit, and to review, discuss, and make decisions based on key assessment and other education-related data. Report data will then be shared with the Green River Regional Educational Cooperative (GRREC) superintendents (representing 43 area school districts served by WKU), the CEBS Advisory Board, and KCTCS Dual Admission representatives.

At the program level, designated program coordinators work with the appropriate member of the CEBS Dean's Office to develop a program-level annual assessment report following this outline:

1. Presentation of continuous assessment results in the following areas:
   a. Admission Data
   b. Mid-Level Key Assessment Data
   c. Early Clinical Experiences Data – including dispositions assessment and KFETS compliance reporting
   d. Final Key Assessment Data
   e. Exit, Follow-Up, and Program Impact Data
2. Summary of results by Kentucky Teacher/InTASC standards (initial programs) or program standards (advanced programs) – including a description of what results suggest about candidates' progress toward/proficiency on each standard
3. Summary of efforts to report and disseminate results (EPP/college-wide meetings, department/program-level meetings, written reports, presentations, etc.)
4. Summary of key discussions and/or decisions made based on assessment results:
   a. Description of any assessment or data collection changes made/to be made based on assessment results
   b. Description of any program curriculum or experience changes made/to be made based on assessment results
   c. Description of any decisions about group/individual student progress made/to be made based on assessment results
5. Discussion of trends in assessment results over several assessment cycles

B. Tracks Results Over Time

See "Discussion of trends in assessment results over several assessment cycles" above.

C. Tests Innovations and the Effects of Selection Criteria on Subsequent Progress and Completion

For key assessments, candidates receiving a holistic score of "1" (on a scale of 1 = Beginning, 2 = Developing, 3 = Proficient, and 4 = Exemplary) will be required to repeat the assessment until successful (scoring at least "2") or will be advised out of the program. Candidates scoring at least "2" will be allowed to continue into the next stage of the program. Behind these holistic scores are analytic, standard-aligned rubrics. The greater quantity and potential variability of scores should allow for longitudinal studies of candidate progress from early to final key assessments, as well as performance on Praxis tests and KTIP assessments. Such studies would then provide sufficient evidence to begin using early candidate performance as selection criteria, which would lead to opportunities to test the effects of implementing these criteria on subsequent candidate performance and completion.

D. Uses Results to Improve Program Elements and Processes

See the information provided under 5.3 A-C.

5.4. Measures of completer impact, including available outcome data on P-12 student growth, are summarized, externally benchmarked, analyzed, shared widely, and acted upon in decision-making related to programs, resource allocation, and future direction.

A. Current Context in Kentucky

WKU and other Kentucky institutions have worked in conjunction with the Kentucky Education Professional Standards Board and other Kentucky education agencies to collect and report on data related to the eight areas listed below.

Table 8. CAEP Annual Reporting Measures

Measure Description | Possible WKU/Kentucky-wide Instruments
Program Impact Measure #1: Impact that completers' teaching has on P-12 learning and development | KCEWS Educator Preparation PGES Report
Program Impact Measure #2: Indicators of teaching effectiveness | KCEWS Educator Preparation PGES Report; KTIP data from EPSB
Program Impact Measure #3: Results of employer surveys, including retention and employment milestones | Kentucky Teacher Preparation Feedback Report; EPSB New Teacher Survey
Program Impact Measure #4: Results of completer surveys | EPSB New Teacher Survey
Program Outcome/Consumer Information Measure #1: Graduation rates from preparation programs | EPSB Candidate Cohort Data in new annual Program Approval process – Kentucky Educator Preparation Accountability System (KEPAS)

Program Outcome/Consumer Information Measure #2: Ability of completers to meet licensing (certification) and any additional state requirements (i.e., licensure rates) | Praxis Content/PLT Exam Results in new annual Program Approval process – KEPAS
Program Outcome/Consumer Information Measure #3: Ability of completers to be hired in education positions for which they were prepared (i.e., hiring rates) | KCEWS Educator Preparation Feedback Report
Program Outcome/Consumer Information Measure #4: Student loan default rates and other consumer information | Information provided by WKU Institutional Research

5.5. The provider assures that appropriate stakeholders, including alumni, employers, practitioners, school and community partners, and others defined by the provider, are involved in program evaluation, improvement, and identification of models of excellence.

As stated previously, the WKU EPP believes highly effective education preparation programs develop and maintain an assessment system that makes credible performance data on the progress and achievement of each candidate available for feedback and reporting to the candidate, faculty, and program. The EPP's system processes include stakeholder involvement at all steps in the assessment cycle. P-12 representatives were and will continue to be integral in the creation/scoring/evaluation of EPP-wide assessments. Partners including GRREC superintendents, the CEBS Advisory Board, and KCTCS representatives will be given opportunities th
