
Journal of Positive Behavior Interventions, Volume 10, Number 1, January 2008, 33-45
© 2008 Hammill Institute on Disabilities
DOI: 10.1177/1098300707311619
http://jpbi.sagepub.com hosted at http://online.sagepub.com

Technical Adequacy of the Functional Assessment Checklist: Teachers and Staff (FACTS) FBA Interview Measure

Kent McIntosh, University of British Columbia, Vancouver, Canada
Chris Borgmeier, Portland State University, Oregon
Cynthia M. Anderson
Robert H. Horner
Billie Jo Rodriguez
Tary J. Tobin
University of Oregon, Eugene

With the recent increase in the use of functional behavior assessment (FBA) in school settings, there has been an emphasis in practice on the development and use of effective, efficient methods of conducting FBAs, particularly indirect assessment tools such as interviews. There are both benefits and drawbacks to these tools, and their technical adequacy is often unknown. This article presents a framework for assessing the measurement properties of FBA interview tools and uses this framework to assess evidence for reliability and validity of one interview tool, the Functional Assessment Checklist: Teachers and Staff (FACTS; March et al., 2000). Results derived from 10 research studies using the FACTS indicate strong evidence of test-retest reliability and interobserver agreement, moderate to strong evidence of convergent validity with direct observation and functional analysis procedures, strong evidence of treatment utility, and strong evidence of social validity. Results are discussed in terms of future validation research for FBA methods and tools.

Keywords: behavioral assessment; positive behavior supports; functional assessment; school-based assessment; applied behavior analysis; functional behavioral assessment

Functional behavior assessment (FBA) is a process for identifying problem behaviors and determining the environmental events that predict and maintain them (Crone & Horner, 2003; O'Neill et al., 1997; Sugai, Lewis-Palmer, & Hagan-Burke, 1999-2000). Through the FBA process, a team situates the problem behavior within a context and analyzes the environmental variables that affect its occurrence and nonoccurrence. The primary outcomes of an FBA are (a) an operational definition of the problem behavior, (b) identification of the antecedent events that reliably predict the occurrence and nonoccurrence of the behavior, and (c) identification of a hypothesized consequence maintaining responding (Carr, Langdon, & Yarbrough, 1999; Horner, 1994; Repp & Karsh, 1994; Sugai et al., 2000). A team then uses this information to create an individualized behavior support plan, which contains specific strategies to teach appropriate skills and to modify the environment to make the problem behavior irrelevant, inefficient, and ineffective (O'Neill et al., 1997). An effective support plan is designed to decrease problem behaviors, increase prosocial behaviors, and improve quality of life (Sugai, Lewis-Palmer, & Hagan, 1998).

FBA is not a new technology for addressing problem behavior; it is rooted in more than 50 years of behavioral research (see Carr, 1977; Carr et al., 2002; Dunlap, 2006; Ervin, Ehrhardt, & Poling, 2001; Skinner, 1953; Sugai et al., 2000). This research supports the value of behavior support plans based on knowledge of the antecedent and consequence events that control problem behavior.

Authors' Note: This research was supported in part by U.S. Department of Education Grant No. H326S980003. Opinions expressed herein do not necessarily reflect the policy of the Department of Education, and no official endorsement by the department should be inferred. The Functional Assessment Checklist: Teachers and Staff is readily available online (at http://www.pbis.org/tools.htm) and in reproducible print form (Crone & Horner, 2003).

Editor's Note: The action editor for this article was Donald Kincaid.

Initial research on FBA involved systematic manipulation of antecedent and consequent stimuli, often in controlled conditions. This process was labeled functional analysis and has a long and tested history (e.g., Carr et al., 1999; Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994; Repp & Karsh, 1994; Vollmer, Northup, Ringdahl, LeBlanc, & Chauvin, 1996).

In recent years, a growing body of research has focused on the use of FBA methods in typical school settings (Crone, Hawken, & Bergstrom, 2007; Ervin, Radford, et al., 2001; Scott et al., 2004; Sugai et al., 2000). This increased interest in school-based FBA is most likely due to two factors. First, the two reauthorizations of the Individuals with Disabilities Education Act (IDEA Amendments of 1997; Individuals with Disabilities Education Improvement Act of 2004) stipulate that school personnel must complete FBAs and behavior support plans when students are at risk for extended suspensions or changes in placement (Drasgow & Yell, 2001). Second, and perhaps in response to these regulations, there has been a recent influx of research supporting the value of FBA procedures in informing intervention in general education settings (e.g., Bergstrom, Horner, & Crone, 2007; Ervin, DuPaul, Kern, & Friman, 1998; Hawken & Horner, 2003; Kearney & Albano, 2004; March & Horner, 2002; Radford & Ervin, 2002; Todd, Horner, & Sugai, 1999). The most convincing evidence to date comes from research studies comparing effects of function-based interventions (indicated from FBA results) with typical school interventions that FBA results indicated would be ineffective (DuPaul, Eckert, & McGoey, 1997; Filter & Horner, in press; Ingram, Lewis-Palmer, & Sugai, 2005; Kearney, Pursell, & Alvarez, 2001; McKenna, 2006; Newcomer & Lewis, 2004). These studies have documented reductions in problem behavior with function-based interventions and either lack of effects or increases in problem behavior with non-function-based interventions (McIntosh, Brown, & Borgmeier, in press).

As the clinical value of FBA has become apparent, efforts have shifted to less rigorous, more efficient strategies for accurately identifying the functions of problem behavior, including indirect FBA measures (e.g., rating scales and interviews designed to be used with teachers, family members, or other care providers). Such measures share several advantages. First, they are a logical starting point in the FBA process because they can help to identify specific problem behaviors, contexts, and possible maintaining functions (Johnston & Pennypacker, 1993). Such information narrows the scope of the FBA to relevant behaviors and routines. Although the information is subject to bias, the respondent is likely to have more exposure to the problem behavior than could be viewed in hours of observation. This is particularly advantageous in general education settings, where conducting extensive direct observations is often difficult because problem behavior often occurs infrequently and, due to the presence of complex environmental variables, may be difficult to predict (Horner, Vaughn, Day, & Ard, 1996; Nelson, Roberts, Rutherford, Mathur, & Aaroe, 1999; Radford & Ervin, 2002; Sprague & Horner, 1999).
In addition, indirect measures are widely used in schools because they are readily available, straightforward to use, and less time consuming and invasive than direct observation (Conroy, Fox, Bucklin, & Good, 1996; Merrell, 2003).

Because of these features, indirect FBA measures have increasingly assumed a central role in the practice of school-based FBAs. Typically, the FBA process involves developing a preliminary hypothesis statement (identifying the setting event, antecedent, behavior, and maintaining consequence) through one or more interviews and records reviews, then confirming the statement through direct observation when necessary and/or feasible (Sugai et al., 1999-2000).
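To make the structure of such hypothesis statements concrete, here is a minimal illustrative sketch in Python; the field values are hypothetical and not drawn from any study reviewed here:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HypothesisStatement:
    """The four components of an FBA behavioral hypothesis statement."""
    setting_event: Optional[str]   # may be unknown or not identified
    antecedent: str                # event that reliably precedes the behavior
    behavior: str                  # operational definition of the problem behavior
    maintaining_consequence: str   # hypothesized function (e.g., escape, attention)

# Hypothetical example, not taken from the reviewed studies:
example = HypothesisStatement(
    setting_event=None,
    antecedent="presentation of a difficult reading task",
    behavior="talking out and leaving seat during independent work",
    maintaining_consequence="escape from the task",
)
print(example.maintaining_consequence)
```

The same four components recur throughout this article as the units on which agreement between measures is judged.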

Functional analysis procedures (as described by Iwata et al., 1994) are rarely used by school personnel because of the difficulty of building local expertise in the procedure and the considerable resources involved; even proponents of functional analysis as best practice in FBA have noted that these experimental procedures are often unfeasible in typical school settings (Carr, Langdon et al., 1999; Sasso, Conroy, Stichter, & Fox, 2001).

FBA Interview Measures

Given that FBA interviews are used widely in schools and that they rely on inferences about functional relations, they deserve an additional level of scrutiny as behavioral measures, particularly in cases in which plans are derived based entirely on interview information. Unfortunately, the evidence base for FBA interview measures is not presently convincing. Researchers have described the lack of empirical evidence regarding psychometric properties of indirect FBA measures in general (Cone, 1997; Gresham, 2003; Sasso et al., 2001) and indirect FBA rating scales in particular (Barton-Arwood, Wehby, Gunter, & Lane, 2003; Floyd, Phaneuf, & Wilczynski, 2005; Stage, Cheney, Walker, & LaRocque, 2002; Zarcone, Rodgers, Iwata, Rourke, & Dorsey, 1991), but only a few studies examining the properties of FBA interview measures exist in the literature, each examining different tools.

From these studies, a general pattern emerges. Taken as a whole, the varying FBA interview measures show initial evidence of interrater reliability (Kinch, Lewis-Palmer, Hagan-Burke, & Sugai, 2001), moderate evidence of convergent validity between teacher and student interview measures (Kinch et al., 2001; Reed, Thomas, Sprague, & Horner, 1997), and poor evidence of convergent validity between student interviews and teacher rating scales (Kwak, Ervin, Anderson, & Austin, 2004). The strongest agreement typically occurs on problem behavior, and the weakest agreement on setting events. Convergent validity with direct observation has been weak, particularly on maintaining consequences (Kwak et al., 2004). In addition, information to establish content validity or treatment utility has not been readily available (Floyd et al., 2005). In sum, increased exploration of the technical adequacy of specific FBA interview measures is justified, especially given their continued use by practitioners in developing individual behavior support in school settings.

Determining the Technical Adequacy of FBA Interview Measures

Assessing the psychometric properties of an FBA interview measure presents challenges. These challenges include the level of inference in identifying "true" function, situational specificity of behavior, the idiographic nature of the FBA process, and characteristics of the interview process. As a result, many researchers have suggested that some traditional measurement standards may be less helpful in assessing the validity and reliability of FBA measures (Cone, 1997; Floyd et al., 2005; Gresham, 2003; Hayes, Nelson, & Jarrett, 1986; Shriver, Anderson, & Proctor, 2001). The reasoning for this position comes from two points. First, direct observation of behavior is seen as a low-inference measurement when compared to assessing within-child traits and is therefore less subject to measurement error (Johnston & Pennypacker, 1993; Shriver et al., 2001). Second, the ultimate test of any measure is to what extent it contributes information that is valuable for intervention, or treatment utility (Gresham, 2003; Hayes et al., 1986; Kern & Dunlap, 1999; Nelson-Gray, 2003).

These points may be valid for more direct FBA measures, such as direct observation, functional analysis, and structural analysis, but indirect measures deserve closer scrutiny. Although the outcome may be identical, measuring the verbal behavior of an interviewee (i.e., information from informants) may be more prone to error than directly observing the problem behavior because of the higher level of inference required (Gresham, 2003; Gresham & Davis, 1988; Johnston & Pennypacker, 1993). If instrumentation error leads to inaccurate conclusions about the maintaining consequence, the resulting intervention could do more harm than good. As such, determining the technical adequacy of existing FBA interview measures is an important task for the field. The purpose of this article is to illustrate how the technical adequacy of indirect measures of FBA might be assessed in a systematic way. To better describe the process, we used an evaluation of the technical adequacy of a brief FBA interview measure, the Functional Assessment Checklist: Teachers and Staff (FACTS; March et al., 2000), as an example.

Method

Evaluations of the technical adequacy of an assessment procedure can be conducted in one of two ways. First, a researcher could set out to conduct a study designed specifically to evaluate the measure in some way. For example, the researcher might compare results obtained with the assessment in question to results obtained from an already established measure.
If a number of studies already have been conducted using the measure in question, an alternative strategy—and the strategy used here—is to compile existing research and to evaluate the extent to which the measure is found to be technically adequate across studies.

Research Studies Sampled and Review Process

If a measure is widely known and used, an appropriate search strategy would be to use a literature database (e.g., PsycINFO) to identify relevant studies. Because the FACTS was developed at the University of Oregon and, to date, has been used in research primarily by those affiliated with the University of Oregon, we chose instead to contact colleagues and request studies in which the FACTS had been used and two additional criteria had been met: (a) the most current version of the FACTS measure (March et al., 2000) was used and (b) either direct observation or functional analysis procedures were conducted to confirm the information generated from the FACTS interviews. Nine studies (identified in the tables and reference list) met these criteria, and one study that provided only social validity information was also included, bringing the total to 10.

All studies were read and coded by at least one of the authors (the process for evaluating interobserver agreement is described below) to gather information. First, one author coded general characteristics of the study (i.e., participant demographics, setting of the study) and the hypothesis statement generated from the FACTS measure. Because the actual FACTS protocols used in the studies were not included, this information was gathered simply by coding the hypothesis statement provided in text by the authors.

Agreement between the FACTS interview and other FBA measures was evaluated in two ways. First, for studies in which other interviews and unstructured observations were used, one author recorded the hypothesis statements reported by the authors for those measures. This was necessary because the authors did not include the raw data (i.e., the sheet on which interview responses were coded, data from observations). Second, for studies using functional analysis, two raters independently evaluated the functional analysis graphs and developed hypothesis statements. Agreement was 100% for all participants across the five studies reporting functional analyses. Finally, two raters independently evaluated intervention graphs to determine percentage change from baseline to intervention. Each rater used a ruler to calculate the mean of the last three baseline points and the last three intervention points. This technique was selected for two reasons: First, the last three points are most likely to be stable and thus reflect minimal effects of extraneous variables, and second, it was the approach used in a recent meta-analysis of the effectiveness of function-based support (Carr et al., 1999). Each rater then independently calculated percentage change. Total agreement on percentage change was better than 99%.
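As a rough illustration of that percentage-change calculation, the sketch below uses hypothetical session data; the raters in this review worked from printed graphs with a ruler rather than from code:

```python
def mean_of_last_three(points):
    """Mean of the final three data points in a phase (assumed to be stable)."""
    return sum(points[-3:]) / 3

def percentage_change(baseline, intervention):
    """Percentage change from the end of baseline to the end of intervention."""
    b = mean_of_last_three(baseline)
    i = mean_of_last_three(intervention)
    return (b - i) / b * 100  # positive values indicate a reduction in problem behavior

# Hypothetical session-by-session rates of problem behavior:
baseline = [12, 14, 13, 15, 14]
intervention = [9, 6, 4, 3, 2]
print(percentage_change(baseline, intervention))  # (14.0 - 3.0) / 14.0 * 100 ≈ 78.6
```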

Participants

A total of 41 students participated in the reviewed studies. Participant characteristics are summarized in Table 1. With the exception of 6 participants, all were in public elementary schools at the time of the study. Of the remaining 6, 3 were in preschool and 3 were in public middle schools when the study was completed. For all participants, FACTS interviews were administered either by doctoral students or by school personnel with expertise in FBA and experience using the FACTS. For all participants, the FACTS was administered to school personnel (most often the student's regular teacher) who had regular contact with the child in the challenging context and were knowledgeable about the behavior being assessed; for Participants 4 through 12, a number of school personnel served as additional informants, with varying levels of contact with the student and his or her problem behavior.

FBA Measures

Functional Assessment Checklist for Teachers and Staff

The FACTS (March et al., 2000) is a semistructured FBA interview measure designed for use in schools with teachers or other school staff as informants. The form was developed by adapting and streamlining the Functional Assessment Interview Form (O'Neill et al., 1997) for use by teams completing FBAs in typical school settings. The FACTS requires 10 to 25 min to complete, with the knowledge level of the informant and the complexity of the behavior serving as the determining factors. The form itself consists of two segments, Parts A and B. In Part A, the respondent identifies problem behaviors and completes a routines analysis, identifying the student's daily schedule of activities and determining which of them are most and least associated with occurrence of the problem behaviors. Part B focuses on a specific problem behavior routine identified in Part A. The interviewer asks the respondent to identify an operational definition of the problem behavior and its setting events, immediate antecedents, and maintaining functions. If multiple problem behavior routines are identified, a separate Part B is completed for each. The outcome of the FACTS is one or more behavioral hypothesis statements as described in the previous section.

Other Measures

Each of the included studies reported the results of other FBA measures to confirm or disconfirm FACTS behavioral hypothesis statements (see Table 2).
The additional measures used with participants were student-guided interviews (15%), direct observation procedures (59%), functional analysis procedures (63%), and office discipline referrals (7%). In addition, the researchers used the hypothesis statements generated from FACTS interviews to design behavior support plans for 47% of the participants. Because of the relatively low number of studies using student-guided interviews and office discipline referral reviews, agreement with these measures was not included in our analyses.

Direct Observation

Direct observation was used to collect FBA information in a number of the studies reviewed. Across studies, the setting for direct observations was identified based on the FACTS interview; observations were conducted in settings where problem behavior was reported to be most frequent. A variety of direct observation forms were used, including standard A-B-C observation forms (Bijou, Peterson, & Ault, 1968) and the Functional Assessment Observation form (O'Neill et al., 1997). Each of the forms allowed for event-based measurement of problem behavior, as well as identification of the antecedents (e.g., presentation of specific tasks, periods of independent work) and consequences (e.g., removal of tasks, teacher attention) associated with the problem behavior. After observations were completed, data were analyzed to identify frequently occurring antecedents and consequences for problem behavior. For all studies, authors did not report the raw data gleaned from direct observations but instead reported a hypothesis statement, including problem behavior, antecedents, and consequences, which was compared to the hypothesis statements generated from FACTS interviews.
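To picture that analysis step, the sketch below tallies hypothetical A-B-C records to find the most frequently co-occurring antecedents and consequences; the record contents are invented for illustration, and the studies themselves used the printed forms cited above:

```python
from collections import Counter

# Each record is one observed instance of problem behavior with its
# surrounding events, as captured on an A-B-C style observation form.
records = [
    {"antecedent": "independent math work", "consequence": "teacher attention"},
    {"antecedent": "independent math work", "consequence": "task removed"},
    {"antecedent": "independent math work", "consequence": "task removed"},
    {"antecedent": "transition to reading", "consequence": "peer attention"},
]

antecedents = Counter(r["antecedent"] for r in records)
consequences = Counter(r["consequence"] for r in records)

# The most frequent antecedent and consequence suggest a hypothesis statement:
print(antecedents.most_common(1))   # [('independent math work', 3)]
print(consequences.most_common(1))  # [('task removed', 2)]
```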

Table 1: Demographic Information of Participants Included in Analyses

[Table 1 lists, for each of the 41 participants, the source study (Bergstrom, 2003; Borgmeier & Horner, 2006; Filter & Horner, in press; March & Horner, 2002; McKenna, 2006; Preciado, 2006; Carter & Horner, 2007; Salentine, 2003; Schindler & Horner, 2005), grade, gender, special education classification (e.g., none, learning disability, cognitive impairment, emotional disturbance, autism, other health impairment), and target behaviors (e.g., talking out, out of seat, work refusal, disruption, off task, aggression, not following directions).]

Note: NS = not specified. ADHD (attention-deficit/hyperactivity disorder) is not a special education classification but is added in parentheses when a medical diagnosis was available.

Functional Analysis

Experimental functional analysis is a procedure in which antecedent and/or consequence variables are manipulated systematically using a single-subject design. This allows for causal statements to be made about the relation between environmental variables and problem behavior (Iwata et al., 1982/1994; O'Neill et al., 1997).

Table 2: Measures Used to Confirm FACTS Information

[Table 2 indicates, for each participant in each study, which methods of functional assessment were used to confirm FACTS information (student-guided interview, direct observation, functional analysis, office discipline referrals) and whether a behavior support plan was based on the FACTS results.]

Note: FACTS = Functional Assessment Checklist: Teachers and Staff (March et al., 2000).

In contrast to analog functional analysis (Iwata et al., 1982/1994), in which three or four predetermined hypotheses (e.g., "Problem behavior is evoked by adult attention deprivation and maintained by attention delivery") are tested with all participants, the studies reviewed here used confirmatory functional analysis procedures (O'Neill et al., 1997), in which specific hypothesis statements derived from another FBA measure (in this case, the FACTS) are tested against an alternative hypothesis statement and a control condition (described more fully in Borgmeier & Horner, 2006). These procedures are often completed in naturalistic settings, with manipulation of the specific environmental variables that are hypothesized to control the behavior (e.g., requests to read grade-level passages instead of generic requests). This increased specificity may lead to increased precision in determining the maintaining consequence of behavior (Horner, 1994). As with analog functional analysis, the conditions are presented in random order, and occurrences of problem behavior are recorded to determine differences in levels of problem behavior for each condition.
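The session logic of a confirmatory functional analysis might be sketched as follows; the condition labels and response counts here are hypothetical, and a real analysis records live observations rather than calling a stub function:

```python
import random

# Confirmatory functional analysis: the FACTS-derived hypothesis is tested
# against an alternative hypothesis and a control condition.
conditions = ["FACTS hypothesis", "alternative hypothesis", "control"]
observed = {c: [] for c in conditions}

def run_session(condition):
    """Stub for one session; returns a hypothetical count of problem behaviors."""
    hypothetical_rates = {"FACTS hypothesis": 8,
                          "alternative hypothesis": 2,
                          "control": 1}
    return hypothetical_rates[condition] + random.randint(-1, 1)

# Conditions are presented in random order across sessions, as in the studies reviewed.
schedule = conditions * 3
random.shuffle(schedule)
for condition in schedule:
    observed[condition].append(run_session(condition))

# Clearly elevated responding under the hypothesis condition supports the FACTS statement.
means = {c: sum(counts) / len(counts) for c, counts in observed.items()}
print(max(means, key=means.get))  # expected: 'FACTS hypothesis'
```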

Measurement Criteria

We adopted criteria established by Floyd et al. (2005) to determine overall measurement properties of the FACTS measure. The researchers adapted criteria from multiple sources, including the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999), to generate a set of criteria relevant to the specifics of FBA measures. In accordance with their recommendations for interview measures, we did not include internal structure as a criterion. The resulting areas of measurement were (a) test-retest reliability, (b) interrater reliability, (c) interobserver agreement, (d) content validity, (e) process validity, (f) convergent validity, (g) treatment utility, and (h) social validity. The criteria are described in Table 3.

Table 3: Criteria for Determining Reliability and Validity of FBA Interview Measures

Reliability
- Test-retest reliability: Do interviews given at different times generate the same hypothesis statement(s)?
- Interrater reliability: Do interviews with different informants (regarding the same context) generate the same hypothesis statement(s)?
- Interobserver agreement: Do different interviewers generate the same hypothesis statement?

Validity
- Content validity: Do the questions in the interview measure adequately reflect the research base?
- Process validity: Does the interview format itself produce valid results?
- Convergent validity: Are the hypothesis statements generated consistent with other (particularly direct and/or experimental) FBA procedures?
- Treatment utility: Are behavior support plans based on interview hypothesis statements associated with meaningful reductions in problem behavior?
- Social validity: Is the interview process viewed by informants as efficient and effective?

Note: Based on information from Floyd et al. (2005) and Shriver, Anderson, and Proctor (2001). FBA = functional behavior assessment.

Results

Reliability

One study to date (Borgmeier, 2003; Borgmeier & Horner, 2006) has examined the FACTS on a scale that allows for examination of its reliability. This study involved FACTS interviews from 63 informants for nine target students. The following information about reliability is reported from that study.

Test-Retest Reliability

To evaluate test-retest reliability, Borgmeier (2003) administered FACTS interviews twice (5 to 7 days apart) for 13 of the 63 informants (21%). Test-retest agreement was calculated through point-to-point comparison of the summary statements generated through the FACTS. Agreement was calculated by dividing the number of agreements per section of the FACTS behavioral hypothesis statement (e.g., setting event, antecedent, or maintaining consequence) by agreements plus disagreements.
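A minimal sketch of that point-to-point comparison follows; the component values below are hypothetical, and actual coding was done on the printed FACTS protocols:

```python
COMPONENTS = ("setting_event", "antecedent", "maintaining_consequence")

def point_to_point_agreement(first, second):
    """Agreements divided by agreements plus disagreements across sections."""
    agreements = sum(first[c] == second[c] for c in COMPONENTS)
    return agreements / len(COMPONENTS)

# Hypothetical test and retest hypothesis statements from one informant:
test = {"setting_event": "none identified",
        "antecedent": "difficult reading task",
        "maintaining_consequence": "escape from task"}
retest = {"setting_event": "lack of sleep",
          "antecedent": "difficult reading task",
          "maintaining_consequence": "escape from task"}

print(point_to_point_agreement(test, retest))  # 2 of 3 sections agree ≈ 0.67
```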
In addition, total FACTS agreement was calculated for the hypothesis statements from each FACTS administration by counting the total number of agreements across all sections. Table 4 provides test-retest reliability estimates according to specific items in the hypothesis statements generated, as well as for the entire statements.

Table 4: Test-Retest Reliability Estimates for the FACTS Measure

Setting events: .62 (8/13)
Antecedents: .77 (10/13)
Functions: .92 (12/13)
Total statement: .77 (30/39)

Note: From Borgmeier (2003). FACTS = Functional Assessment Checklist: Teachers and Staff (March et al., 2000).

Results showed strong test-retest reliability levels for antecedents, functions, and total statements and moderate levels for setting events. Among the nine responses for which there was disagreement in the test and retest results, the mean level of contact with the student in the specific routine being assessed was 2.2 (n = 9), a rating indicating less than 1 hour per week with the student in that routine. When reporting maintaining functions, the one informant without agreement across administrations did not have contact with the student in the specific routine being assessed.

Interrater Reliability

For each student, Borgmeier (2003) completed FACTS interviews with five to eight staff members with varying levels of exposure to the students (in terms of settings and experience with the problem behavior) and knowledge about behavioral theory. Interrater reliability was determined by comparing agreement on maintaining consequence across all informants. Results showed moderate agreement among the 5 to 8 informants in varied settings, ranging from .50 to .88.

Interobserver Agreement

Interobserver agreement information for the FACTS measure was collected with 9 of 63 participants (14%). To calculate agreement, both the interviewer and an observer completed a FACTS protocol for the same interview. There was 100% agreement across all items of the summary of behavioral hypothesis statements for all nine interviews.

Content Validity

For an FBA measure, content validity refers to the extent to which the measure reflects the empirical literature on functional relations. Like other FBA interview measures, the FACTS measure generates a behavioral hypothesis statement, identifying an operational definition of problem behavior and the environmental events that evoke it (discriminative stimuli, establishing operations), maintain it (consequences), and momentarily change the value of consequences (setting events). Such information has strong evidence for validity; this evidence is based on more than 50 years of research documenting a functional relation between behavior and the environmental events that precede and follow it (Carr et al., 1999; Hayes et al., 1986; Shriver et al., 2001; Skinner, 1953).

The routines analysis item is a unique feature of the FACTS that is designed to produ
