AN EVALUATION OF ROUTINES ANALYSES WITHIN FUNCTIONAL BEHAVIOR ASSESSMENT

by

AARON C. BARNES

A DISSERTATION

Presented to the Department of Special Education and Clinical Sciences and the Graduate School of the University of Oregon in partial fulfillment of the requirements for the degree of Doctor of Philosophy

December 2009

University of Oregon Graduate School

Confirmation of Approval and Acceptance of Dissertation prepared by: Aaron Barnes

Title: "An Evaluation of Routines Analyses within Functional Behavior Assessment"

This dissertation has been accepted and approved in partial fulfillment of the requirements for the Doctor of Philosophy degree in the Department of Special Education and Clinical Sciences by:

Cynthia Anderson, Chairperson, Special Education and Clinical Sciences
Robert Horner, Member, Special Education and Clinical Sciences
Richard Albin, Member, Special Education and Clinical Sciences
Jean Stockard, Outside Member, Planning, Public Policy and Management

and Richard Linton, Vice President for Research and Graduate Studies/Dean of the Graduate School for the University of Oregon.

December 12, 2009

Original approval signatures are on file with the Graduate School and the University of Oregon Libraries.

© 2009 Aaron C. Barnes

An Abstract of the Dissertation of Aaron C. Barnes for the degree of Doctor of Philosophy in the Department of Special Education and Clinical Sciences, to be taken December 2009

Title: AN EVALUATION OF ROUTINES ANALYSES WITHIN FUNCTIONAL BEHAVIOR ASSESSMENT

Approved: Dr. Cynthia M. Anderson

Procedures for direct observation as part of functional behavior assessment (FBA) in natural settings continue to be an important area of inquiry and evaluation in the field of education. Spread across a continuum of control and rigor, the various direct FBA methods carry distinct strengths and limitations. The purpose of this study was to evaluate the treatment utility of routines analysis when applied to direct observation as part of the function-based assessment and intervention process in general education classrooms. Central to this procedure is the use of routines analysis during the FBA interview to inform and develop direct observation conditions. The procedure was evaluated across 3 students in grades K-6. Data collected via this procedure showed utility when compared to traditional ABC observation methods, in that clearer indications of a hypothesized function of behavior were obtained. Interventions developed from the assessment data resulted in an observed decrease in problem behavior for each participant. Results of this study suggest the importance of routines analysis as a possible way to increase the efficiency and effectiveness of the FBA process.

CURRICULUM VITAE

NAME OF AUTHOR: Aaron C. Barnes

PLACE OF BIRTH: Iowa City, IA

DATE OF BIRTH: April 24th, 1981

GRADUATE AND UNDERGRADUATE SCHOOLS ATTENDED:
University of Oregon; Eugene
Minnesota State University, Mankato; Mankato
Luther College; Decorah, IA

DEGREES AWARDED:
Doctor of Philosophy, School Psychology, 2009, University of Oregon
Master of Arts, Clinical Psychology, 2006, Minnesota State University, Mankato
Bachelor of Arts, Psychology, 2003, Luther College

AREAS OF SPECIAL INTEREST:
Function-based Assessment and Intervention
Systems-level Changes in Schools
School-wide Models of Support
Applied Behavior Analysis

PROFESSIONAL EXPERIENCE:
Assistant Professor, University of Wisconsin - Stout, Menomonie, WI, Aug 2009-present
Social Behavior Collaborative Planner, St.
Croix River Education District, Rush City, MN, 2008-2009
Teaching Assistant, University of Oregon, 2005-2008; Minnesota State University, Mankato, 2003-2005
Research Team Member, Education and Community Supports, University of Oregon, 2005-2008

GRANTS, AWARDS AND HONORS:
School Psychology Student-Faculty Representative, University of Oregon, 2007-2008
School Psychology Candidates Interview Weekend Student Coordinator, University of Oregon, 2008
Magna cum Laude, Minnesota State University, Mankato
Regent's Scholar, Luther College

PUBLICATIONS:
Barnes, A. C., & Harlacher, J. E. (2008). Response-to-intervention as a set of principles: Clearing the confusion. Education and Treatment of Children, 31, 417-431.
Cisler, J., Barnes, A. C., Farnsworth, D., & Sifer, S. (2007). Reporting practices of psychological research using a wait-list control: Current state and suggestions for improvement. International Journal of Methods in Psychiatric Research, 16, 34-42.

ACKNOWLEDGMENTS

I wish to express my sincere gratitude and appreciation first to Professor Anderson for her assistance and support throughout this process. She provided sound advice and professional judgment in formulating this dissertation from beginning to end. Her support helped to see this project through, and I am thankful for her commitment. Thanks also go out to Dr. B. Stiller and the teachers who participated in the study. Dr. Stiller's aid in recruitment, consultation, and collaboration was a major boon to this project. Thank you to Carla Zimmerman, Denise Ermatinger, Dianne Gilland, and Amy Burrow for allowing me to conduct a study within your respective classrooms. Your commitment to the study provided valuable data for your students, schools, and the academic community at large. A special thank you to fellow school psychology students who participated in data collection: Shelley Mullen, Courtney Coughlin, Billie Jo Rodriguez, and Brad Cohn. Immense gratitude is reserved for R. Justin Boyd. Your assistance was invaluable in keeping this project moving towards completion during its final year.

For my wife Emily. You have shared in the work and sacrifice. Now you share in the accomplishment.

TABLE OF CONTENTS

I. INTRODUCTION AND LITERATURE REVIEW
   Functional Behavior Assessment
   Indirect Functional Behavior Assessment
   Direct Functional Behavior Assessment
      Descriptive Methods
      Experimental Analysis
      Structural Analysis
   Statement of the Problem
II. PHASE I: FUNCTIONAL BEHAVIOR ASSESSMENT
   Method
      Participants and Setting
      Measure, Response Definitions and Interobserver Agreement
      Procedure
         FACTS Interview
         Teacher Training
         Comparison of Direct FBA Observations
         Integrity Measurement
      Data Analysis
   Results
      Andre
      Mark
      Renee
III. PHASE II: INTERVENTION ANALYSIS
   Method
      Participants and Setting
      Response Definitions and Interobserver Agreement
      Procedure
   Results
      Problem Behavior and Academic Engagement
      Contextual Fit
IV. DISCUSSION
   Summary of Findings
   Study Limitations
   Implications for Research and Practice
APPENDICES
   A. CONSENT AND ASSENT FORMS
   B. FUNCTIONAL ASSESSMENT CHECKLIST FOR TEACHERS AND STAFF (FACTS)
   C. SELF ASSESSMENT OF CONTEXTUAL FIT
REFERENCES

LIST OF FIGURES
1. Comparison of Phase I Observation Conditions for Andre
2. Comparison of Phase I Observation Conditions for Mark
3. Comparison of Phase I Observation Conditions for Renee
4. Intervention Evaluation Data for Andre
5. Intervention Evaluation Data for Mark
6. Intervention Evaluation Data for Renee

LIST OF TABLES

1. Average Percent (Range) of Interobserver Agreement for Phase I
2. Direct Observation Conditions and Description for Each Participant
3. Percentage of Session Time Scored with Antecedent Variables

CHAPTER I
INTRODUCTION AND LITERATURE REVIEW

Functional behavior assessment (FBA) may be one of the most effective resources available for improving outcomes for students exhibiting problem behavior in today's schools. Generally speaking, FBA is a collection of techniques used to develop hypotheses about the environmental variables of which behavior is a function. This information can be used to design behavior support plans to increase desired behaviors and decrease problem behaviors. Functional behavior assessment, along with a functionally derived intervention, can be described collectively as function-based supports. The utility of function-based supports has been well documented in clinical and other non-school settings (e.g., Carr & Durand, 1985; Carr et al., 1999; Didden, Duker, & Korzilius, 1997; Iwata et al., 1982/1994) as well as in schools (e.g., Dunlap, Kern-Dunlap, Clarke, & Robbins, 1991; Erickson, Stage, & Nelson, 2006; Roberts, Marshall, Nelson, & Albers, 2001). Although the 1997 reauthorization of IDEA often requires an FBA as part of individualized education plans (IEPs) for students exhibiting serious problem behaviors, educational professionals often find themselves without the training or tools necessary to access the potential benefits of function-based supports. Researchers in the area of function-based supports have not done enough to develop, evaluate, and disseminate methods of functional behavior assessment appropriate for use in schools. As a result of this failure to bridge the research-to-practice gap, the degree to which function-based supports are accurately and efficiently utilized in school settings continues to suffer.

In this chapter, various methods for conducting FBA are described; special attention is given to FBA research in school settings. The chapter concludes with a summary statement of the current problem and an introduction to the purpose of the study.

Functional Behavior Assessment

The purpose of FBA is to generate information on the events preceding and following a target behavior and to determine which events are reliably associated with the occurrence of the behavior. Events that occur before a target behavior are called antecedents, and events that occur after the target behavior are called consequences. Antecedents can be further broken down into discriminative stimuli, or events that signal an increased likelihood that the target behavior will be followed by a specific consequence (i.e., reinforcement or punishment), and establishing operations, which are events that increase the potency of a particular reinforcer at a particular time. Consequences may be classified as either positive or negative reinforcement or positive or negative punishment.
The use of positive or negative refers to the respective introduction or removal of environmental stimuli, while reinforcement or punishment refers to either increasing or decreasing the future probability of the target behavior occurring.

Types of FBA are often delineated based upon the methods used to collect information. These methods can be either indirect or direct. Both types are described below, along with examples, advantages, and limitations.

Indirect Functional Behavior Assessment

Indirect assessments involve gathering information about behaviors of interest from the perspective of another person, either the target individual or someone who is familiar with the target individual (e.g., a parent, teacher, or mentor). Interview or rating scale formats may be used for this type of assessment. Examples of functional assessment interviews include the Functional Assessment Checklist for Teachers and Staff (FACTS; March et al., 2000), the Functional Assessment Interview (FAI; O'Neill, Horner, Albin, Sprague, Storey, & Newton, 1997), the Preliminary Functional Assessment Survey (Dunlap et al., 1993), the Problem Identification Interview (PII; Bergan & Kratochwill, 1990), and the Student-Guided Functional Assessment Interview (Reed, Thomas, Sprague, & Horner, 1997). Commonly cited rating scales and checklists include the Problem Behavior Questionnaire (PBQ; Lewis, Scott, & Sugai, 1994), the Motivation Assessment Scale (MAS; Durand & Crimmins, 1998), and the Teacher Functional Behavioral Assessment Checklist (TFBAC; Stage, Cheney, Walker, & LaRocque, 2002). Each gathers information in order to develop hypotheses regarding the function of the problem behavior. Differences remain, however, in the amount and type of information collected. Some interviews and rating scales focus solely on a limited number of potential consequences for problem behavior (e.g., sensory stimulation, escape, attention, access to tangible rewards), while others collect information regarding potential antecedent events, descriptions of problem behavior and/or appropriate behavior, or the likelihood of problem behavior occurring during a particular time or in a particular place.

Indirect assessments have the potential to provide large amounts of information in relatively small amounts of time, are typically recommended practice during the early stages of a multimethod FBA process (Floyd, Phaneuf, & Wilczynski, 2005), and can be administered (though perhaps not scored) by individuals with a wide range of behavioral expertise. In addition, they are useful for behaviors that occur only infrequently or are covert, thus rendering direct observation difficult. Studies focusing on empirical evidence for the use of indirect methods have produced mixed results; some studies demonstrate good reliability and validity whereas others report limitations. Four major areas of technical adequacy have been identified in the literature: interrater reliability, convergent validity between sources, convergent validity with other indirect measures, and convergent validity with direct observations. Each is discussed below.

Examining interrater reliability and convergent validity between sources involves administering the indirect measure to multiple informants who may have varying degrees of familiarity with the target student. Ratings can be examined for agreement on identifying problem behaviors, setting events or antecedents, and consequences or functions of behavior.
Studies have found initial evidence ranging from low to high interrater reliability between multiple teachers as well as between teachers and students (Barton-Arwood, Wehby, Gunter, & Lane, 2003; Borgmeier, 2003; Kinch, Lewis-Palmer, Hagan-Burke, & Sugai, 2001; Murdock, O'Neill, & Cunningham, 2005; Reed, Thomas, Sprague, & Horner, 1997; Zarcone, Rogers, Iwata, Rourke, & Dorsey, 1991). Agreement between raters is generally highest regarding problem behaviors and consequences, and lowest regarding setting events. Interviews appear to show the strongest convergent agreement between students and teachers (Kinch et al., 2001).

Evaluating convergent validity with other indirect measures involves administering distinct interviews or rating scales to one or more individuals and comparing the results. Much of the evidence for convergent validity in this area is poor, whether examining agreement between interviews and rating scales (Alter, Conroy, Mancil, & Hayden, 2008; Kwak, Ervin, Anderson, & Austin, 2004) or between different rating scales (Barton-Arwood et al., 2003; Stage et al., 2006).

To further demonstrate the validity of indirect methods, researchers and practitioners often attempt to provide evidence that the functional relation identified by the interview or rating scale coincides with the functional relation identified by another, demonstrably accurate, measure (Shriver, Anderson, & Proctor, 2001). Studies making these types of comparisons often utilize systematic, direct observation methods (described next) as a means of comparison. Several studies have found limited evidence of convergent agreement between indirect and direct measures (Alter et al., 2008; Kwak et al., 2004; Murdock, O'Neill, & Cunningham, 2005; Stage, Cheney, Walker, & LaRocque, 2002; Stage et al., 2006). Agreement regarding the maintaining consequences of a behavior has been particularly low in many of these comparisons.

Overall, the poor, inconsistent, or limited evidence for the technical adequacy of indirect FBA measures in the areas of interrater reliability, convergent validity between sources, convergent validity with other indirect measures, and convergent validity with direct observations justifies further exploration of these measures. Because of these concerns, the use of indirect methods often is encouraged in conjunction with direct methods (Carr et al., 1999). Depending on a wide range of factors (e.g., the detail of the tools, the skill of the assessor, the skill of the respondent as an observer and reporter, the attention given to routines and events associated with problem and desired behaviors), the utility of indirect assessment results for planning behavioral interventions can vary greatly. Information provided by these measures may not be sufficient to develop adequate supports for behaviors of greater intensity or in contexts of greater complexity. To the extent that the reliability and validity of indirect methods are a concern, there is a risk of either a Type 1 error (concluding that behavior serves a particular function when, in fact, the behavior is not primarily maintained by that consequence) or a Type 2 error (failing to identify the consequence that primarily maintains a particular behavior) obscuring the assessment conclusions.

The FACTS interview is unique among indirect methods of FBA because it involves first identifying routines (e.g., academic tasks, unstructured times) during which problem behavior most often occurs and then guides the interviewer to identify relevant environmental variables for those specific routines.
This is a significant strength of the FACTS because topography is not the same as function; the same response (e.g., hitting others) can occur in different situations but be maintained by different consequences. For example, a child might hit peers during math class because she routinely is sent to the office and hence avoids working on math problems (negative reinforcement), whereas hitting on the playground results in attention from her friends (positive reinforcement). A review of recent studies using the FACTS interview suggests that this indirect assessment produces hypotheses similar to those of more direct methods of FBA (described below) and is useful for developing effective interventions (McIntosh et al., 2008).

Direct Functional Behavior Assessment

Next, different types of direct functional assessment are reviewed. These methods are characterized by information gathering via direct observation as opposed to the report of another person. The various direct methods may be differentiated by the degree of control associated with their use. Descriptive methods are reviewed first, followed by experimental methods, as these represent two ends of the continuum of control. Finally, structural analyses are discussed, as they incorporate aspects of both descriptive and experimental methods.

Descriptive Methods

On the lower end of the control continuum lie descriptive assessments. These methods involve recording instances of target behaviors as well as the environmental events that immediately precede or follow those behaviors (e.g., Erickson, Stage, & Nelson, 2006; Lerman & Iwata, 1993; Newcomer & Lewis, 2004; Penno, Frank, & Wacker, 2000; Roberts, Marshall, Nelson, & Albers, 2001). Descriptive assessments most often are carried out in the setting in which problem behavior reportedly occurs, and environmental events are not manipulated or altered in any way. For example, Roberts et al. worked with three elementary students and conducted observations during various school activities while measuring off-task behavior and associated environmental variables. More specifically, task type (academic or non-academic) and difficulty (instructional or frustrational) were recorded as antecedent stimuli, and escape, teacher attention, or peer attention were recorded as consequences. Results were used to form hypotheses regarding events that preceded and maintained off-task behavior. For all three participants, problem behavior occurred more often during frustrational or difficult tasks than during instructional or non-academic tasks. Escape from difficult tasks was hypothesized as a maintaining consequence. As shown in this example, environmental events associated with problem behavior may be observed and noted during descriptive assessments, particularly if the observation is conducted in the general setting where problem behavior has been reported to occur. Such assessments are typically conducted, however, without explicit efforts to ensure the presence of variables that are likely to co-occur with problem behavior. Because descriptive assessments are conducted in natural settings and because environmental variables are not manipulated, they may allow for observation of naturally occurring relations between environmental variables and an individual's behavior. The extent to which these relations are clearly identified has significant implications for the ease and effectiveness of using the assessment results to design and implement behavior supports. A simple illustration of how such descriptive data might be summarized is sketched below.
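As a rough illustration only (not drawn from the studies described here), the following sketch tallies a set of hypothetical antecedent-behavior-consequence records from a descriptive observation to show which antecedents co-occurred with problem behavior and which consequences followed it. The record format, variable names, and values are assumptions made for the example.

from collections import Counter

# Hypothetical ABC records: (antecedent, problem_behavior_observed, consequence or None)
records = [
    ("difficult task", True,  "escape"),
    ("difficult task", True,  "teacher attention"),
    ("difficult task", True,  "escape"),
    ("easy task",      False, None),
    ("easy task",      True,  "peer attention"),
    ("non-academic",   False, None),
]

# Count how often problem behavior was observed under each antecedent
behavior_by_antecedent = Counter(a for a, b, _ in records if b)
# Count how often each consequence followed problem behavior
consequences_after_behavior = Counter(c for _, b, c in records if b and c)

print(dict(behavior_by_antecedent))       # {'difficult task': 3, 'easy task': 1}
print(dict(consequences_after_behavior))  # {'escape': 2, 'teacher attention': 1, 'peer attention': 1}

Tallies such as these can suggest, but never confirm, a hypothesis (e.g., escape from difficult tasks), which is exactly the limitation discussed next.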
Because events are not manipulated systematically, however, hypotheses cannot be verified (Iwata, Vollmer, & Zarcone, 1990). Another potential limitation of descriptive assessments stemming from the lack of environmental control is that, if environmental events of interest occur infrequently or if multiple events tend to occur in close temporal proximity to the target behavior, it may be difficult to develop hypotheses about functional relations between environmental variables and the target response. Finally, simply documenting that an event often precedes or follows the response does not demonstrate that the event is functionally related to the response.

Experimental Analysis

On the "high-control" end of the continuum lie experimental methods. Experimental methods of functional assessment are called functional analyses and use single-subject research designs to demonstrate causal relations between environmental events and problem behavior. These methods involve systematically manipulating environmental variables that are hypothesized to evoke and maintain problem behavior (e.g., Carr & Durand, 1985; Hall, 2005; Harding et al., 1999; Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994; Peck, Sasso, & Stambaugh, 1998). Arguably the "gold standard" of functional analysis is the analog functional analysis developed by Iwata et al. (1982/1994). The analog typically is run in a controlled setting by trained experimenters and consists of three to five test conditions, each designed to test a different hypothesis about events that evoke and maintain problem behavior. Commonly evaluated antecedents include attention deprivation, task presentation, and removal of preferred items. Consequences tested include attention delivery, removal of items, and access to preferred items. The analog functional analysis has been used to identify environment-behavior relations and has successfully been used to develop interventions in a large number of largely clinic-based studies. This method has also been used in more basic research to study environment-behavior relations (cf. Lerman, 2003).

Other methods of functional analysis exist as well, including "antecedent only" methods and brief functional analysis. In antecedent-only functional analysis, antecedent stimuli are manipulated while no programmed consequences are delivered for problem behavior (e.g., Carr & Durand, 1985). Recent attention has been given to ways the analog functional analysis might be shortened, and a commonly used approach is the brief functional analysis (e.g., Northup et al., 1991; Watson & Sterling, 1998; Wilder, Chen, Atwell, Pritchard, & Weinstein, 2006). In brief functional analyses, a shortened experimental design is used; for example, only one to two sessions per condition may be conducted, or sessions are run for brief periods (e.g., 5 min). For example, Northup et al. examined the behavior of three individuals exhibiting self-injurious and aggressive behavior. Participants were systematically presented with one to two trials of two to four conditions. Each condition lasted from five to ten minutes. These conditions resembled those from a traditional analog functional analysis. A specific condition was associated with elevated rates of problem behavior for each participant. To demonstrate functional control, a contingency reversal condition was also run for each participant, in which the consequence associated with the highest levels of problem behavior (e.g.,
tangible/activity or attention) was presented only after a predetermined appropriate behavior, such as signing "please" or saying, "please come here." Levels of problem behavior were reduced in the contingency reversal conditions for each participant.

Because experimental methods focus on manipulating variables in a systematic and controlled manner, they are useful for demonstrating clear functional relations between environmental events and problem behavior. Limitations of these methods include the high level of training needed to carry them out (Peck, Sasso, & Stambaugh, 1998) and the possibility that their treatment utility in natural settings, especially when the functional analysis is conducted in a controlled setting, may be low (Anderson & Long, 2002; English & Anderson, 2006; Carr, Yarbrough, & Langdon, 1997; Van Camp et al., 2000). For example, English and Anderson compared caregiver- and experimenter-conducted analog functional analyses in a laboratory setting to a specific type of structural analysis (described next) conducted in the natural setting with three children diagnosed with developmental disabilities who exhibited severe problem behavior (e.g., self-injury, aggression). Different patterns of responding were observed across the methods of functional assessment. English and Anderson went on to evaluate interventions based on each method of assessment in participants' natural settings. For all participants, interventions based on the structural analysis, conducted in the natural setting, were more effective than interventions based on the functional analyses.

Structural Analysis

Structural analysis is similar to descriptive assessment in that observations take place in the natural setting and typical caregivers are involved, but in structural analysis some environmental variables are controlled systematically. All structural analyses involve manipulating antecedent variables; however, different approaches to consequent stimuli have been taken (e.g., Anderson & Long, 2002; English & Anderson, 2006; Peck, Sasso, & Stambaugh, 1998; Stichter, Sasso, & Jolivette, 2004). In one type of structural analysis (Peck & Sasso, 1997; Stichter et al., 2004; Stichter & Conroy, 2005), antecedent variables are selected for manipulation based upon a service provider's recommendation and unstructured observations. Environmental variables associated with either the occurrence or the nonoccurrence of problem behavior may be manipulated. Importantly, in this type of structural analysis, consequent stimuli are not recorded; instead, the focus is on the proportion of problem behavior that occurs in the presence of different antecedent stimuli. Most often, the maintaining consequence (e.g., escape as a reinforcer in the presence of difficult academic tasks) is assumed. Studies conducted using this method have demonstrated potential for identifying and verifying relevant antecedent variables and developing effective interventions in schools. For example, a teacher working with Stichter et al. indicated that a student's problem behavior was most likely to occur when there was significant background noise in the classroom and that problem behavior was much less likely to occur when other students were not present or were not making noise in the classroom. The student was systematically exposed to repeated conditions of high and low levels of noise, structure, and social interaction, and those conditions associated with lower levels of problem behavior (particularly high structure) were noted.
When the student was presented with tasks involving high structure and moderate levels of noise and social interaction, researchers observed fewer instances of problem behavior compared to a control condition involving moderate levels of structure, noise, and social interaction.

Although structural analyses such as those conducted by Stichter and her colleagues are useful for identifying putative establishing operations and discriminative stimuli, they provide little information about consequences that might reinforce problem behavior. In contrast, an alternative method of structural analysis, the structured descriptive assessment (SDA) developed by Anderson and her colleagues (Anderson & Long, 2002; Anderson, English, & Hendrick, 2006; English & Anderson, 2006), can be used to develop hypotheses about the entire three- or four-term contingency. During the SDA, predetermined antecedent variables are systematically manipulated using a multi-element design, while problem behavior and consequences for problem behavior are not altered. Data are collected using real-time scoring on a variety of environmental events, regardless of whether they precede or follow problem behavior. Events typically scored include delivery or removal of attention, delivery or removal of requests to complete tasks, and delivery or removal of preferred activities. Data are analyzed to determine the proportion of problem behavior evoked by various events as well as the proportion of problem behavior followed by a given event. For example, data analysis might reveal that, when attention from adults is not available, 75% of all problem behavior is followed by adult attention, whereas problem behavior never occurs when adult attention is available. The SDA has been used in a number of studies and found to be useful for identifying functional relations and building effective interventions (e.g., Anderson & Long, 2002; English & Anderson, 2006). A limitation of the SDA is that, to date, conditions have been conducted using fairly standard antecedent variables; this method might be more useful for typically developing individuals if more idiographic stimuli were chosen and manipulated.

Relative to both unstructured observations (in which no variables are manipulated) and functional analyses (typically conducted in controlled settings and involving standardized environmental events), structural analyses may provide an increased likelihood of observing relevant behavior-consequence relations and therefore lead to more conclusive and accurate hypotheses regarding environment-behavior relations. Another advantage of using structural analyses in natural settings may be the comparatively low level of training and expertise required. Training and fluency in behavior assessment may be required to design conditions and interpret results, but service providers and peers are often asked only to "respond as you normally would" to problem behaviors. Formal functional analyses, by comparison, require the precise creation and manipulation of environmental stimuli. Of course, a limitation of structural analyses is that functional control is not demonstrated. In addition, if, as can happen in the SDA, problem behavior is followed by a variety of consequences, it may be difficult to develop a hypothesis about which event or events actually maintain the response.

Statement of the Problem

As described above, a variety of methods of FBA exist, each with strengths and limitations.
Analog and other experimental methods of functional analysis are valued because they allow for a demonstration of functional control, but such methods often are unrealistic for use in schools (Peck, Sasso, & Stambaugh, 1998; Stichter & Conroy, 2005). Structural methods of functional analysis may be useful, but no clear guidelines exist for identifying the potentially relevant antecedent conditions under which observations should be conducted. For researchers and those with a great deal of expertise in behavior analysis this is less problematic; however, many practitioners may struggle to identify environmental conditions to test. Although indirect methods of FBA often are criticized as unreliable, the FACTS interview stands out as a potentially very useful tool because (a) it takes only a short amount of time to complete and (b) because it focuses on relevant routines, it may be more useful for developing hypotheses about environment-behavior relations.

The purpose of this study is to build on existing research on the FACTS interview and subsequent direct FBA methods by evaluating the treatment utility of conducting a FACTS interview to identify relevant routines within which observations (direct FBA) could be conducted. After completion of the FACTS interview, an alternating treatments design was used to compare results of observations conducted during the specific activity suggested by the FACTS to contain relevant antecedent variables with observations conducted in the more general context in which a typical ABC assessment might be conducted. Finally, interventions were implemented to assess the treatment utility of the routines analysis of the FACTS. To summarize, the present study aims to contribute to the existing literature on FBA methodology and utility by addressing the following research questions: (a) whether observed levels of problem behavior differed between the routines-based, traditional ABC, and control conditions; (b) whether the functions of behavior suggested by the FACTS interview and by the direct observation data were in agreement; and (c) whether there was a functional relation between an intervention based upon the hypothesized function of problem behavior and a decrease in problem behavior. The present study consisted of two phases, the functional behavior assessment phase and the intervention analysis phase. They are described below, beginning with the methods and results of Phase I, followed by the methods and results of Phase II.

CHAPTER II
PHASE I: FUNCTIONAL BEHAVIOR ASSESSMENT

Method

In Phase I the FBA was conducted. A FACTS interview was conducted with the teacher of each participant. The interview was followed by a comparison of the routines-based observation to a traditional ABC observation.

Participants and Setting

Three typically developing children exhibiting problem behavior in school settings participated. Participants were recruited from public schools in the Pacific Northwest. All three schools were implementing School-Wide Positive Behavior Supports (SWPBS) across a three-tiered model, as documented by scoring 80% or higher on the School-wide Evaluation Tool (SET; Horner et al., 2004); scoring 80% or higher on this tool is suggested by its authors as a criterion for meaningful outcomes. Procedures associated with SWPBS (e.g., established behavior support teams, student-referral processes) may have aided in the recruitment and willingness of school personnel to participate. Participants were recruited in several steps.
First, researchers were contacted by school behavior support teams as those teams identified students and teachers who seemed appropriate and expressed a willingness to participate. Researchers then contacted the classroom teacher to provide information and answer questions. If teachers provided their informed consent to participate, they contacted parents to assess parental interest and, if the parent was interested, sent a consent form home to be signed. When teacher and parent consent were obtained, the teacher arranged a meeting between a researcher and the student, during which the student had the opportunity to provide assent. Teacher, parent, and student consent/assent forms can be found in Appendix A.

Andre was a kindergarten student who received 100% of his instruction in the general education setting. He was referred to the study due to disruptive behavior. Andre's teacher indicated that he was most likely to engage in disruptive behavior during independent reading and writing tasks. More specifically, she reported that Andre engaged in crying, whining, and refusal (e.g., saying "I can't do it," "I'm dumb," or "I don't know how"). All observations for Andre took place during reading and language arts instruction, where students were expected to cycle between large-group, small-group, and independent work tasks.

Mark was a 2nd-grade student who received 100% of his instruction in the general education setting. He was referred to the study due to disruptive behavior. Mark's teacher indicated that he was most likely to engage in talking with or touching, poking, or kicking peers during large-group reading instruction. All observations for Mark took place during morning reading instruction, which was primarily delivered in a whole-class, large-group format.

Renee was a 6th-grade student who received 100% of her instruction in the general education setting. She was referred to the study due to disruptive and off-task behavior. Renee's teacher indicated that she was most likely to make off-topic comments directed to peers and adults, and to be out of her seat, during large-group instruction. All observations for Renee took place in a general education classroom incorporating lecture, discussion, and independent work tasks.

Measure, Response Definitions and Interobserver Agreement

The Functional Assessment Checklist for Teachers and Staff (FACTS; March et al., 2000) was administered to all teachers prior to completing the comparison of direct observation conditions. Using the FACTS, the researcher spent 20-25 minutes interviewing the participant's teacher to identify the problem behavior, develop an operational definition, identify routines in which the problem behavior often occurred, and develop a hypothesis statement specifying events that evoked and maintained problem behavior. A copy of the FACTS is included in Appendix B.

During direct observation, data were collected on pre-defined student and teacher behaviors and on relevant contextual variables. All observation data were collected using a real-time data collection system on laptop computers. For students, data were collected on problem behavior and one or more relevant appropriate responses. Responses were identified and defined in the FACTS interview. Prior to conducting the comparison of direct observation conditions, an initial observation was conducted to verify the definitions.
Problem behavior for both Andre and Mark was identified as disruption, defined as any vocalization or physical interaction that was not related to the current academic task. For Andre, disruption tended to include crying, whining, and refusal (e.g., saying "I can't do it," "I'm dumb," or "I don't know how"). For Mark, disruption entailed behaviors such as talking at or touching, poking, or kicking peers. For Renee, the problem behavior identified was talk-outs, defined as any verbal statement made during instruction when the expectation was to remain quiet, or any statement made without raising a hand or without being prompted (e.g., when the teacher had not asked for comments or answers from the group or the target student). Follow-up with the classroom teacher confirmed talk-outs as the behavior of concern. During data collection, disruption was measured using a partial-interval system, and talk-outs were measured via frequency count.

Because each student's targeted behavior was most likely to occur during academic work, academic engagement was selected as the appropriate behavior. Academic engagement was defined as (1) following teacher requests within 10 s, (2) orienting the eyes toward the teacher or the materials relevant to the academic task, and (3) completing in-class tasks as requested by the teacher. Academic engagement was scored using duration recording. In addition, there was a 3-s delay for scoring the onset and offset of academic engagement to control for discrete instances of behavior (e.g., briefly looking at the teacher or relevant materials); such instances were scored only if they continued beyond the 3-s delay.

Data were collected as well on teacher responses, peer responses (as relevant), and key contextual events. Teacher responses were scored as a partial-interval measure across consecutive 5-s intervals. Data on the following teacher responses were collected: request, scored when a teacher provided an academically related instruction or prompt to the student using verbal statements or physical guidance; escape allowed, scored when, in the absence of academic engagement, requests were not delivered for one complete interval; and teacher attention, scored when the teacher interacted with the student, either verbally or physically, in a non-instructional manner (e.g., a high five, a pat on the back, saying "good job"). Although positive/neutral attention initially was coded separately from negative teacher attention, these codes were combined in order to facilitate the analysis of environment-behavior relations. Data were collected as well on peer attention (also scored separately as positive and negative but combined for analysis of environment-behavior relations). Peer attention was scored whenever a peer interacted with the student, verbally or physically, in a manner other than what was related to instruction (e.g., "way to go," "How was your weekend?," a high five, "stop it," or "be quiet").

Interobserver agreement data were collected for 31% of all sessions for each participant. Agreement for frequency measures was calculated by dividing sessions into consecutive 5-s intervals, comparing observers' records for each interval, dividing the smaller number of recorded responses by the larger, averaging the proportions across the session, and multiplying by 100 to obtain an agreement coefficient. For all responses coded as partial-interval measures, total agreement, occurrence agreement, and nonoccurrence agreement were calculated, as described next; a brief sketch of the frequency-based coefficient is shown first.
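The following minimal sketch illustrates the frequency-based agreement coefficient just described. The data, the helper name, and the handling of intervals in which neither observer recorded a response are illustrative assumptions, not part of the study's materials.

def frequency_agreement(obs1_counts, obs2_counts):
    # For each 5-s interval, divide the smaller count by the larger,
    # average the proportions across intervals, and multiply by 100.
    # Intervals with no responses from either observer are treated as
    # perfect agreement (an assumption; the text does not specify this case).
    proportions = []
    for c1, c2 in zip(obs1_counts, obs2_counts):
        if c1 == 0 and c2 == 0:
            proportions.append(1.0)
        else:
            proportions.append(min(c1, c2) / max(c1, c2))
    return 100 * sum(proportions) / len(proportions)

# Hypothetical talk-out counts from two observers across six 5-s intervals
print(frequency_agreement([1, 0, 2, 0, 1, 3], [1, 0, 1, 0, 1, 3]))  # about 91.7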
Total agreement was calculated by dividing the number of intervals in which both observers agreed on the occurrence or nonoccurrence of the response by the total number of intervals and multiplying by 100 to obtain a percentage of agreement. Occurrence-only agreement was calculated by dividing the number of intervals in which both observers agreed that a response occurred by the number of intervals in which either observer scored a response and multiplying by 100. Nonoccurrence agreement was calculated by dividing the number of intervals in which both observers agreed that a response did not occur by the number of intervals in which either observer did not score a response and multiplying by 100.

Prior to beginning data collection, each data coder participated in three two-hour training sessions and a final one-hour training session. The first session involved reviewing the measures and procedures. The second and third sessions involved practicing observations using videos and calculating interobserver agreement. Before baseline data were collected, each observer conducted a minimum of two observations in a classroom setting with the researcher. Practice sessions continued until agreement coefficients were 80% or higher for all target responses during two consecutive practice sessions. If total agreement fell below 80% for three consecutive sessions at any time during the study, the data collector ceased data collection and was retrained until the 80% criterion was reached.

For problem behavior, total agreement averaged 96% (range = 91%-98%), occurrence-only agreement averaged 88% (range = 0%-100%), and nonoccurrence-only agreement averaged 96% (range = 91%-100%). For academic engagement, total agreement averaged 93% (range = 82%-100%), occurrence-only agreement averaged 91% (range = 82%-100%), and nonoccurrence-only agreement averaged 93% (range = 83%-100%). For adult attention, total agreement averaged 98% (range = 90%-99%), occurrence-only agreement averaged 91% (range = 0%-98%), and nonoccurrence agreement averaged 98% (range = 91%-100%). For peer attention, total agreement averaged 93% (range = 80%-100%), occurrence-only agreement averaged 82% (range = 0%-100%), and nonoccurrence agreement averaged 93% (range = 83%-100%). Interobserver agreement data are shown for each participant in Table 1.

Table 1. Average Percent (Range) of Interobserver Agreement for Phase I

Andre
  Problem Behavior:     Total 96% (91%-100%); Occurrence Only 95% (89%-100%); Non-occurrence Only 97% (91%-100%)
  Academic Engagement:  Total 92% (89%-94%);  Occurrence Only 90% (85%-97%);  Non-occurrence Only 97% (93%-100%)
  Adult Attention:      Total 98% (94%-99%);  Occurrence Only 98% (97%-98%);  Non-occurrence Only 98% (94%-100%)
  Peer Attention:       Total 82% (80%-98%);  Occurrence Only 75% (74%-96%);  Non-occurrence Only 85% (83%-100%)

Mark
  Problem Behavior:     Total 96% (92%-100%); Occurrence Only 93% (89%-100%); Non-occurrence Only 97% (94%-100%)
  Academic Engagement:  Total 89% (82%-99%);  Occurrence Only 86% (82%-99%);  Non-occurrence Only 91% (83%-100%)
  Adult Attention:      Total 98% (90%-99%);  Occurrence Only 91% (86%-96%);  Non-occurrence Only 98% (91%-99%)
  Peer Attention:       Total 94% (90%-98%);  Occurrence Only 91% (86%-99%);  Non-occurrence Only 94% (93%-96%)

Renee
  Problem Behavior:     Total 98% (98%-98%);  Occurrence Only 0% (0%-0%);     Non-occurrence Only 98% (98%-98%)
  Academic Engagement:  Total 99% (98%-100%); Occurrence Only 98% (98%-100%); Non-occurrence Only 92% (84%-100%)
  Adult Attention:      Total 98% (97%-99%);  Occurrence Only 58% (0%-67%);   Non-occurrence Only 98% (97%-99%)
  Peer Attention:       Total 99% (98%-100%); Occurrence Only 50% (0%-100%);  Non-occurrence Only 99% (98%-100%)

Procedure

The FACTS interview was conducted first with teachers. Next, teacher training occurred, and then direct observations were conducted.
FACTS Interview

The FACTS interview was administered to all participating teachers by the primary researcher, except in the case of Renee, whose FACTS was conducted by a doctoral student in the school psychology program with extensive expertise in function-based supports. The interview was administered in an area away from other students and adults (to maintain confidentiality) at a pre-arranged time selected by the teacher. When the FACTS interview was completed, a summary hypothesis statement was developed for each routine and presented to the teacher to ensure that he or she agreed with the relevant antecedent and consequent stimuli. All teachers agreed with the hypothesis statements generated. The hypothesis statements generated for each participant were as follows. During independent reading and writing tasks, Andre engages in disruption, including crying, whining, and yelling (e.g., "I can't do this," "I don't know how"), maintained by adult attention. During large-group reading instruction, Mark engages in disruption, including talking to, touching, poking, or kicking peers, maintained by peer attention. During large-group instruction in reading/writing, Renee talks out of turn, and this behavior is maintained by adult attention.

Teacher Training

Prior to conducting direct observations, written information was provided to teachers regarding the purpose of the various assessment conditions. In addition, once the assessment began, verbal or written feedback was provided following each session, and teachers were given the opportunity to contact the researcher with any questions. During sessions, prompts to reestablish relevant antecedent conditions were provided as described below.

Comparison of Direct FBA Observations

A multi-element design was used to compare outcomes of observations conducted in the relevant routine identified in the FACTS interview to observations conducted without regard to the presence or absence of that routine, as typically occurs during ABC methods of FBA. The assessment was conducted during the activity (e.g., math) in which problem behaviors most often occurred, as identified during the teacher interview, and three conditions were conducted with each participant. Data collection occurred during
Teachers were asked to present the activity as they typically would and to respond to problem behavior as usual. If, in the absence of academic engagement or problem behavior, the teacher did not re-establish the presence of independent reading/writing, large group reading, or large-group writing for Andre, Mark, or Renee, respectively, within 2 minutes, the teacher was prompted by a researcher to re-establish that relevant antecedent stimulus. For example, an observer might say to the teacher, "could you please ask Andre to begin working on independent 26 work again," or, "could you please ask Mark/Renee to attend to the group discussion or lecture once again?" During the traditional ABC condition, observations were conducted during the problematic activity indicated on the FACTS, however the presence of the specific antecedent identified as a discriminative stimulus or establishing operation was not controlled. For each participant, the traditional ABC condition was conducted during reading/writing, reading, and writing for Andre, Mark, and Renee respectively. During this condition, any stimuli that typically were presented (e.g., difficult assignments, group work, "easy" assignments) might have occurred. As in the routines-based condition, the general activity was re-established after two consecutive minutes if, in the absence academic engagement or problem behavior, the teacher did not prompt the student to return to the currently expected task. During the control condition the general routine was present however the variable identified as a SD or EO was absent. Thus, for Andre, independent work did not occur, and for Mark and Renee, large group instruction did not occur. In this condition, if the antecedent stimulus manipulated in the routines-based condition was present in this condition for more than 2 min, the teacher was asked to please cease the activity. The conditions conducted with each participant are summarized in Table 2. Integrity Measurement The primary issue of integrity during Phase I was the manipulation of antecedent variables. Particularly during the routines-based and control conditions, relevant, predefined environmental variables (e.g. difficult or easy math tasks, respectively) would Table 2. Direct Observation Conditions and Description for Each Participant 27 Condition Description Andre Routines-based condition Independent task with reading/writing content. Traditional ABC condition Reading/writing content (e.g., large-group, small-group, independent work). Control condition Small-group reading/writing instruction. Mark Routines-based condition Large-group reading instruction. Traditional ABC condition Reading instruction (e.g., large-group, small group, independent work, partner reading). Control condition Partner Reading. Renee Routines-based condition Large-group writing instruction Traditional ABC condition Control condition Writing content (e.g., large-group, small- group, independent work) Independent writing task 28 occur frequently in their respective conditions, but would occur rarely, if ever, in the opposing condition. Further, a variety of antecedent stimuli (e.g., difficult math, group work, easy math) would occur during the traditional ABC condition however the general routine (math) should be present. The fidelity of implementation of each condition was assessed by calculating the percent of session time during which predefined environmental variables were scored. 
For the routines-based condition, any session in which the specific antecedent stimulus (e.g., difficult math) did not occur during at least 50% of intervals was not included in the data analysis. During the traditional ABC condition, any session in which the general context (e.g., math content of some type) did not occur during at least 50% of intervals was not included in the data analysis. Finally, for the control condition, any session in which the key variable for that condition (e.g., easy math) did not occur during at least 50% of intervals was not included in the data analysis. The presence of the various antecedent variables for each participant and session can be seen in Table 3. In the table, each condition is listed with its key environmental variable, and the percentages give the mean amount of session time (with standard deviation) scored with each antecedent variable observed during that condition. For example, for Andre, during the routines-based condition, large-group, small-group, and partner instruction did not occur; independent work, the antecedent controlled for in this condition, occurred during an average of 94% of session time, with a standard deviation across the 5 sessions of 6.27%. Across participants, the putative SD/EO occurred most often during the routines-based condition and rarely or not at all during the control condition. The general context was present during the majority of all traditional ABC observations. These data suggest that the routines-based, traditional ABC, and control conditions were implemented with fidelity.

Data Analysis

When observations were completed for each student, the data were examined to develop hypotheses about environment-behavior relations. Line graphs were developed to evaluate trends within and across conditions. In addition, contingency space analyses (Martens et al., 2008) were calculated to evaluate the relation between environmental variables and student behavior. To facilitate calculations, all responses scored as frequency measures were converted to partial-interval data by dividing each observation session into continuous 5-s intervals. For these calculations, a response was coded as being followed by a particular event each time the event was scored within the current or subsequent interval. For example, an interval scored with disruption would be scored as being followed by adult attention if that event was recorded in the same or the subsequent interval. If adult attention was recorded in the same interval but before disruption was recorded, those events were not scored as an instance of disruption followed by adult attention. If problem behavior was recorded in three consecutive intervals and the third instance of problem behavior was followed by adult attention within the third or subsequent interval, only the third instance of problem behavior was scored as being followed by adult attention. To facilitate analyses of the data, a contingency space analysis (CSA; Martens et al., 2008) was conducted for each observation session from Phase I. This involved calculating and graphing the difference between the probability of a consequence given a behavior and the probability of a consequence given the absence of the behavior.
Dividing the number of intervals in which a behavior was followed by a consequence by the total number of intervals in which the behavior was scored provided a value between 0 and 1 for the probability of a consequence given a particular behavior. For example, if disruption was scored in 30 intervals and was followed by attention in 25 of those 30 intervals, the resulting proportion would be .83. Dividing the number of times a particular consequence was scored in the absence of problem behavior by the number of intervals during which the behavior was not scored provided a value between 0 and 1 for the probability of a consequence given the absence of the behavior (i.e., the base rate of the consequence). For example, if attention was scored in the absence of disruption in 6 intervals and there were 42 intervals during which disruption was not scored, the resulting proportion would be .14 (indicating a low base rate of attention; attention rarely occurred in the absence of problem behavior). The resulting proportions were summed across sessions and divided by the number of sessions per condition to obtain mean proportions for each condition. These were graphed in coordinate space such that the probability of a consequence given the presence or absence of a response could be viewed. On a contingency space analysis graph, values are shown in relation to each other and to a diagonal line drawn through the graph, which indicates where two points would be plotted if the values were equal (i.e., a particular consequence was as likely to be delivered following the behavior as it was when the behavior had not occurred).

Table 3. Percentage of Session Time Scored with Antecedent Variables (mean percent of session time, SD in parentheses)

Andre
  Routines-based condition (independent seat-work): independent work 94% (6.27); unstructured 6% (6.26)
  Traditional ABC condition (general reading/writing): large-group instruction 28% (33.75); independent work 39% (35.86); small-group instruction 16% (6.76); partner work 16% (13.46)
  Control condition (small-group instruction): small-group instruction 98% (3.36); independent work 2% (3.36)

Mark
  Routines-based condition (large-group reading): large-group instruction 97% (5.25); independent work 3% (5.25)
  Traditional ABC condition (general reading): large-group instruction 67% (44.5); partner work 33% (44.50)
  Control condition (partner reading): partner work 93% (10.30); large-group instruction 6% (11); independent work 1% (2)

Renee
  Routines-based condition (lecture during reading/writing): large-group instruction 97% (4.76)
  Traditional ABC condition (general reading/writing): large-group instruction 29% (35.79); independent work 61% (30.39); unstructured 10% (20)
  Control condition (independent seat-work): independent work 100% (0)

Note. Antecedent variables not listed for a condition were not observed during that condition.

For example, consider a problem behavior such as disruptive contact, defined as any peer-to-peer contact occurring during large-group instruction. During observations, disruptive contact occurred in 35 intervals and was followed by peer attention 18 times. This would result in a proportion of .51; that is, problem behavior was followed by peer attention 51% of the time. If peer attention was delivered 3 times throughout the observation when no problem behavior had occurred, and there were 85 total intervals scored without problem behavior, this would yield a proportion of .04; that is, peer attention was delivered in the absence of problem behavior 4% of the time. This calculation is sketched below.
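As an illustration of the contingency space calculation just described, the following minimal sketch computes the two proportions using the totals from the worked example above. The helper name is hypothetical, and the counts are assumed to already reflect the scoring rule that a behavior is "followed by" a consequence only when the consequence is scored in the same or the subsequent 5-s interval.

def csa_proportions(behavior_followed_by_consequence, behavior_intervals,
                    consequence_without_behavior, no_behavior_intervals):
    # p(consequence | behavior): intervals with the behavior that were followed
    # by the consequence, divided by all intervals in which the behavior was scored.
    p_given_behavior = behavior_followed_by_consequence / behavior_intervals
    # p(consequence | no behavior): deliveries of the consequence in the absence of
    # the behavior, divided by all intervals without the behavior (the base rate).
    p_given_no_behavior = consequence_without_behavior / no_behavior_intervals
    return p_given_behavior, p_given_no_behavior

# Worked example from the text: disruptive contact in 35 intervals, followed by peer
# attention 18 times; peer attention delivered 3 times across 85 intervals without
# problem behavior.
p_b, p_nb = csa_proportions(18, 35, 3, 85)
print(round(p_b, 2), round(p_nb, 2))  # 0.51 0.04

Plotting such pairs in coordinate space, with the diagonal marking equal probabilities, is what distinguishes consequences that are positively contingent on the behavior from those delivered at similar rates regardless of the behavior.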
Data such as these might suggest peer attention as a maintaining consequence of the problem behavior and would also be compared to calculations for other potential maintaining variables.
Results
Phase I of the present study investigated (1) whether observed levels of problem behavior differed between the routines-based, traditional ABC, and control conditions, and (2) whether the functional relations suggested by the FACTS interview, traditional ABC observation, and routines-based structural analysis were in agreement. First, data are presented regarding levels of problem behavior across sessions of the functional behavior assessment. Second, results of the CSA for each participant's assessment data are presented.
Results for problem behavior across conditions of the functional assessment are depicted in Figure 1 for Andre, Figure 2 for Mark, and Figure 3 for Renee. For all figures, the top panel depicts the percent of intervals during which the antecedent variable most strongly associated with problem behavior was present. The second panel from the top shows the percent of intervals scored with problem behavior or the number of instances of problem behavior recorded across sessions. The bottom three panels depict the CSA results for the routines-based, traditional ABC, and control conditions.
Andre
For Andre, the putative relevant antecedent variable, independent reading or writing, occurred far more often during the routines-based condition than during either the traditional ABC or control condition (top panel, Figure 1). As depicted in the second panel of Figure 1, problem behavior was scored most often during independent seat-work during reading/writing. For the routines-based, traditional ABC, and control conditions respectively, problem behavior occurred during an average of 32% (range = 26%-37%), 10% (range = 5%-13%), and 10% (range = 5%-18%) of intervals. The contingency space analysis shown in the third and fourth panels of Figure 1 indicates that during the routines-based condition, teacher attention was positively contingent on disruption and was more likely to occur following problem behavior than either peer attention or escape. Adult attention was delivered following problem behavior approximately 40% of the time and was delivered only 7% of the time when no problem behavior had occurred. During the traditional ABC condition, both adult attention and peer attention were only slightly more likely to be delivered following disruption than to occur at other times. No dependencies were found during the control condition, as shown in the fifth panel, where all points are plotted near to or on the diagonal line. Taken together, the results of the alternating treatments design for Andre support the utility of the routines-based analysis, as problem behavior occurred more often during this condition, allowing for more frequent observation of behavior-consequence relations. Further, the contingency between problem behavior and teacher attention was stronger in the routines-based analysis.
Figure 1. Comparison of Phase I Observation Conditions for Andre. (Panels show the percent of session with the key antecedent present, problem behavior across sessions, and CSA plots of peer attention, adult attention, and escape for the routines-based, traditional ABC, and control conditions.)
Mark
Results obtained with Mark are in Figure 2. As is shown in the top panel of the figure, although the putative SD/EO, large-group instruction, occurred most consistently
during the routines-based condition, it was present for the majority of intervals during three sessions of the general condition as well. Perhaps as a result of this, results were somewhat undifferentiated between the routines-based and traditional ABC conditions (second panel, Figure 2). Importantly, although intervals scored with disruptive behavior were relatively stable during the routines-based condition (when the presence of large group instruction was stable), during the traditional ABC condition problem behavior was scored most often during sessions when large group instruction occurred and less often when that variable was absent. Problem behavior occurred during an average of 33% (range = 23%-38%), 32% (range = 13%-60%), and 13% (range = 5%-24%) of intervals in the routines-based, traditional ABC, and control conditions respectively. The CSA of Mark's data, shown in the bottom panels of Figure 2, indicates that across the routines-based, traditional ABC, and control conditions, peer attention was positively contingent upon disruption and was more likely to occur following problem behavior than either adult attention or escape. Further, these contingencies were stronger during the routines-based condition, where peer attention was delivered approximately 42% of the time following disruption and only approximately 1% of the time when disruption had not occurred. During the traditional ABC condition, peer attention was delivered approximately 34% of the time following disruption and only 1% of the time when disruption had not occurred. During the control condition (partner reading), peer attention was likely to follow problem behavior 39% of the time but was also delivered 23% of the time when disruption had not occurred.
Figure 2. Comparison of Phase I Observation Conditions for Mark. (Panels show the percent of session with large-group instruction present, problem behavior across sessions, and CSA plots of peer attention, teacher attention, and escape for the routines-based, traditional ABC, and control conditions.)
Taken together, the results of the alternating treatments design for Mark support the utility of the routines-based analysis, as problem behavior occurred more often during this condition and because problem behavior occurred during the traditional ABC condition only when that variable was present. The contingency between problem behavior and peer attention was indicated across all three conditions but was strongest in the routines-based analysis.
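Condition-level summaries like the means and ranges reported above are derived by reducing each session to a single percentage and then aggregating by condition. The sketch below shows one plausible way to do this; the interval records and condition assignments are invented for illustration and are not the study's data.

```python
# Illustrative sketch: summarizing percent-of-intervals data by condition.
# The interval records below are hypothetical, not the study's data.

def percent_scored(intervals):
    """Percent of intervals in a session scored with the target behavior."""
    return 100 * sum(intervals) / len(intervals)


sessions = {
    "routines-based": [[True, False, True, True], [True, True, False, False]],
    "traditional ABC": [[False, False, True, False], [False] * 4],
    "control": [[False, False, False, True], [False] * 4],
}

for condition, records in sessions.items():
    percents = [percent_scored(r) for r in records]
    mean = sum(percents) / len(percents)
    print(f"{condition}: M = {mean:.0f}% "
          f"(range = {min(percents):.0f}%-{max(percents):.0f}%)")
```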
Renee
Results obtained with Renee are in Figure 3. As is shown in the top panel, the putative SD/EO (large group writing) occurred most consistently during the routines-based condition, although it occurred during the majority of intervals of two sessions of the traditional ABC condition as well. Intervals scored with problem behavior are in the second panel. Problem behavior occurred most consistently during the routines-based condition and, with the exception of the first session, never during the control condition. In contrast, problem behavior occurred somewhat variably during the traditional ABC condition, and comparison of the first two panels does not reveal a clear relation between the presence of large group writing and problem behavior in this condition. The CSA of Renee's data, shown in the bottom panels of Figure 3, indicates that, across conditions, teacher attention was positively contingent on problem behavior, although this relation was strongest during the routines-based condition, where adult attention followed problem behavior 38% of the time that problem behavior was scored and adult attention was scored in the absence of problem behavior only 4% of the time. Interestingly, peer attention was positively contingent on problem behavior during the traditional ABC and control conditions, but no contingency was observed during the routines-based condition, as peer attention was never scored. Finally, escape was positively contingent on problem behavior only during the traditional ABC condition; escape was not scored during the other two conditions.
Figure 3. Comparison of Phase I Observation Conditions for Renee. (Panels show the percent of session with the key antecedent present, instances of talk-outs across sessions, and CSA plots of peer attention, teacher attention, and escape for the routines-based, traditional ABC, and control conditions.)
Results obtained with Renee support the utility of the routines-based observations relative to a traditional ABC analysis only tentatively. Problem behavior did occur more often in the routines-based condition; however, different consequences were observed across conditions, and the validity of these consequences as reinforcers is unknown.
In sum, across two of three participants, the routines-based analysis resulted in the clearest patterns of responding and hypotheses about response-consequence relations. The validity of these hypotheses is unknown, however. Thus, the purpose of Phase II was to assess validity indirectly, via an evaluation of the treatment utility of hypotheses gleaned from each assessment.
CHAPTER III
PHASE II: INTERVENTION ANALYSIS
Method
The purpose of this phase was to evaluate interventions developed based upon the results of Phase I. This allows for an indirect assessment of the validity of the various observation methods by focusing on treatment utility. Interventions were matched to the hypothesized function of problem behavior (as suggested by Phase I results).
Interventions were implemented and evaluated under the same conditions as the routines-based assessment condition, as this allowed for interventions to be evaluated during conditions in which problem behavior was most likely to occur. Upon this study's completion, multi-component interventions were developed and implemented for each participant as appropriate.
Participants and Setting
Participants and settings were as per Phase I. Interventions were carried out during the routines-based condition identified in Phase I. Across all Phase II sessions, the routines-based variable associated with the occurrence of problem behavior was present for Andre, Mark, and Renee an average of 98%, 96%, and 99% of session time respectively.
Response Definitions and Interobserver Agreement
Definitions of problem behavior, contextual variables, and teacher/peer responses were as per Phase I. Interobserver agreement data were collected for 30% of sessions using the same procedure as in Phase I. For problem behavior, total agreement averaged 98% (range = 91%-100%), occurrence only averaged 90% (range = 0%-100%), and nonoccurrence only averaged 97% (range = 91%-100%). For academic engagement, total agreement averaged 96% (range = 85%-100%), occurrence only averaged 91% (range = 82%-100%), and nonoccurrence only averaged 88% (range = 0%-100%). For adult attention, total agreement averaged 97% (range = 0%-100%), occurrence only averaged 81% (range = 0%-100%), and nonoccurrence averaged 97% (range = 96%-100%). For peer attention, total agreement averaged 97% (range = 82%-99%), occurrence only averaged 82% (range = 0%-99%), and nonoccurrence averaged 98% (range = 94%-100%).
Procedure
Antecedent conditions were established as per Phase I, and intervention sessions were 10 min in duration. All intervention phases were conducted until stability was observed via visual inspection. A minimum of three sessions was conducted for each phase of the intervention. Appropriate single-subject designs were used to assess functional control. Each intervention was matched to the hypothesis statement suggested in Phase I and was designed to assess the treatment utility of consequences identified as potentially reinforcing. For Andre, both the routines-based and traditional ABC conditions suggested that problem behavior might be maintained by adult attention and, to a lesser extent, peer attention. For Mark, a positive contingency between problem behavior and peer attention was observed during all three conditions of the alternating treatments design, and for Renee, the routines-based condition suggested only teacher attention to be reinforcing whereas the traditional ABC condition suggested that teacher attention, peer attention, and escape might be reinforcing.
For Andre, the intervention analysis was conducted during independent work. To test the hypothesis that problem behavior was maintained by adult attention, Andre was taught to approach and check in with the teacher after completion of a teacher-selected portion of work (one that should take approximately 3-5 min). Prior to the occurrence of independent work, Andre's teacher reviewed the material, checked for understanding, and told Andre to complete a portion (approximately 3-5 min) of the task and then check in with the teacher to share his work. When Andre checked in, his teacher reviewed the work, provided verbal feedback for between 20 and 40 s, and told Andre to complete the next segment of the task.
The classroom teacher was instructed to ignore disruptive behavior whenever possible and to provide brief redirection in a neutral tone of voice only if Andre's behavior presented a safety risk or was prohibiting the learning of his peers. Across intervention sessions, adult attention was delivered for the appropriate duration following successful completion of a segment of work for 94% of opportunities. This represents approximately 8% of intervals being scored with adult attention following appropriate behavior, compared to 1% of intervals scored with adult attention following problem behavior (see Table 4). In order to assess functional control, Andre's intervention was evaluated using an ABAB reversal design.

Table 4. Data Concerning Fidelity of Implementation for Phase II Interventions

Andre: Adult attention delivered following 94% of successful self-recruitment procedures.
Mark: Without intervention, peer attention delivered following problem behavior 7.5% of the time; during intervention, peer attention delivered following problem behavior 3.5% of the time.
Renee: During intervention, adult attention delivered following problem behavior 0% of the time.

The results of the FBA for Mark suggested that disruptive behavior occurred most often during large-group reading instruction and was maintained by peer attention. Because the instructional context was not conducive to increasing the amount of peer interaction, an intervention was developed with the goal of decreasing the amount of peer attention that followed disruptive behavior during group instruction. At the beginning of each intervention session, the classroom teacher held up a glass jar and reviewed the classroom expectations for large group instruction. The class was directed to respond to peers' disruptive behavior by modeling appropriate behavior and otherwise ignoring peer disruption. The classroom teacher told students that as she observed them meeting classroom expectations (and especially if she observed them modeling appropriate behavior during peer disruption), she would place marbles in the jar. When marbles in the jar reached certain levels, the class would earn rewards (e.g., extra recess, an end-of-day party). Across intervention sessions in which peers were directed to ignore problem behavior during large-group instruction, peer attention following problem behavior was delivered to Mark during 2.3% of intervals, compared to 7.5% of intervals when the intervention was not in place (see Table 4). An alternating treatments design was used to assess functional control for Mark's intervention.
For Renee, the routines-based condition suggested only teacher attention to be reinforcing, whereas the traditional ABC condition suggested that teacher attention, peer attention, and escape might be reinforcing. An examination of the antecedent stimuli present during problem behavior in the traditional ABC condition showed that each talk-out followed by peer attention or escape occurred during independent work, while 75% of talk-outs followed by adult attention occurred during large group instruction. Although variability was observed in the level of problem behavior during large group instruction during the alternating treatments design of Phase I (as shown in the two upper panels of Figure 3), anecdotal report from the classroom teacher confirmed large-group instruction as a primary routine of concern.
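The antecedent breakdown just described, sorting each consequence-followed talk-out by the context in which it occurred, is essentially a cross-tabulation. The sketch below illustrates how such a tabulation might be done; the event records are invented for illustration and are not the study's data.

```python
# Illustrative cross-tabulation of consequences that followed problem behavior
# by the antecedent context in which the behavior occurred. Records are invented.
from collections import Counter

# Each record: (antecedent context, consequence that followed the talk-out)
talk_outs = [
    ("large group instruction", "adult attention"),
    ("large group instruction", "adult attention"),
    ("independent work", "peer attention"),
    ("independent work", "escape"),
    ("large group instruction", "adult attention"),
]

counts = Counter(talk_outs)
for (context, consequence), n in sorted(counts.items()):
    print(f"{context:25s} -> {consequence:15s}: {n}")
```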
The intervention was designed with adult attention treated as the primary maintaining reinforcer for Renee's talk-out behavior. The intervention implemented (described next) allowed flexibility to add components based on peer attention or escape following completion of the study if necessary. In order to facilitate Renee's receiving adult attention contingent upon attending to lecture and refraining from talk-outs, an intervention was developed based on the school's existing PRIDE card intervention. At the beginning of the writing class, Renee would check in with the teacher and was reminded of the expectation that she remain academically engaged and refrain from engaging in talk-out behaviors. At the end of the period, the teacher would review the card, providing a larger number of points and verbal feedback if Renee had successfully met the expectation and fewer points and minimal verbal feedback if Renee had failed to meet the expectations. Because the card was part of a larger intervention in the school, Renee did receive points and feedback for behaviors other than those targeted in the current study, but during check-in, teachers would emphasize the importance of academic engagement and refraining from talk-outs during reading and writing class. Adults were directed to ignore any talk-outs during intervention sessions. A copy of Renee's PRIDE card can be found in Appendix C. Across intervention sessions in which adults were directed to ignore problem behavior during large-group instruction and points and attention were delivered at the end of class contingent upon academic engagement, adults delivered attention to Renee zero times following problem behavior (see Table 4). An ABAB reversal design was used to assess functional control for Renee's intervention.
Results
In this section, data are presented regarding problem behavior and academic engagement recorded during the intervention evaluation phase.
Problem Behavior and Academic Engagement
Results for problem behavior and academic engagement across the assessment and intervention phases are depicted in Figure 4 for Andre, Figure 5 for Mark, and Figure 6 for Renee. All data for this phase were collected during conditions matching the routines-based structural analysis for that participant. Because all teachers expressed a desire for a timely intervention, new baseline data were not collected; instead, the routines-based sessions were used as the baseline.
As shown in Figure 4, an ABAB reversal design was used to assess functional control for Andre's intervention. Upon implementation of the intervention, a 73% reduction in intervals scored with problem behavior was observed (top panel). Following a return to baseline to establish functional control, the intervention was re-instituted. In this final phase, an 84% reduction in problem behavior (relative to the initial baseline) was observed. The intervention resulted in changes in academic engagement for Andre as well (bottom panel, Figure 4). Although initially variable, academic engagement increased steadily upon implementation of the intervention. Following a return to baseline, academic engagement again increased, and a 257% increase in academic engagement relative to the initial baseline was observed.
Mark's intervention was evaluated using an alternating treatments design in which baseline conditions (no marble jar) were compared to the intervention (Figure 5).
During baseline, the teacher had the students divided into four groups, and students earned or lost tick marks contingent on appropriate and inappropriate behavior. There was no system for redeeming tick marks for additional rewards. While the intervention was in place, a 77% reduction in disruption was observed (top panel, Figure 5). Mark's observed levels of academic engagement increased during intervention as well (bottom panel, Figure 5). Although initially variable, following the initial baseline, observed levels of academic engagement were 39% higher than during baseline conditions.
Figure 4. Intervention Evaluation Data for Andre. (Panels show the percent of intervals scored with disruption and with academic engagement across baseline and self-recruitment phases.)
Figure 5. Intervention Evaluation Data for Mark. (Panels show the percent of intervals scored with disruption and with academic engagement across tick-mark baseline and marble jar sessions.)
As shown in Figure 6, an ABAB reversal design was used to assess functional control for Renee's intervention. Observed talk-outs gradually increased during baseline. Upon implementation of the intervention, observed talk-outs increased for one session before decreasing (top panel, Figure 6). During the first intervention phase, a 50% reduction in talk-outs was observed. Following a return to baseline to establish functional control, the intervention was re-instituted. In this final phase, an 89% reduction in problem behavior (relative to the initial baseline) was observed. Changes in academic engagement were observed during Renee's intervention as well (bottom panel, Figure 6). Although initially decreasing, academic engagement levels were variable across the first intervention session and a return to baseline. Although functional control was not demonstrated, academic engagement increased upon reinstating the intervention, and a 22% increase in academic engagement relative to the initial baseline was observed.
Figure 6. Intervention Evaluation Data for Renee. (Panels show instances of talk-outs and the percent of intervals with academic engagement across baseline and modified point card phases.)
Contextual Fit
Contextual fit was assessed after data collection had ended for each participant using the Self-Assessment of Contextual Fit in Schools (Horner, Salentine, & Albin, 2003). Classroom teachers were asked to complete the 16-item questionnaire as it applied to the specific intervention developed for each participant. The contextual fit questionnaire can be found in Appendix C. Overall, classroom teachers rated the interventions high on contextual fit (M = 4.96).
All items were rated as "5" or higher, except for "I have received any training I need to be able to implement this support plan," which Mark's teacher rated as a "2," and "Implementing this behavior plan will not be stressful," which Andre's teacher rated as a "4."
CHAPTER IV
DISCUSSION
The goal of this study was to evaluate the utility of conducting a routines analysis to guide the conduct of direct observations. Specifically, this study examined whether direct observations conducted during a specific routine identified in a routines analysis conducted via interview would yield more conclusive results regarding environment-behavior relations. This was assessed via an alternating treatments design in which observations were conducted in the presence of that relevant routine and during a more general condition in which the general context was similar but the presence of the relevant routine was not controlled for. A control condition, during which the specific routine was absent, was conducted as well. Finally, interventions based on hypotheses from the routines-based observations were implemented to assess the intervention utility of routines-based direct observations.
Summary of Findings
In this section, results are summarized and discussed in relation to the research questions.
Did observed levels of problem behavior differ between the routines-based, traditional ABC, and control conditions? Were greater levels of problem behavior observed when the relevant routine identified from the FACTS interview was present?
Phase I of this study utilized an alternating treatments design to compare the observed levels of problem behavior across the routines-based, traditional ABC, and control conditions. Overall, problem behavior was recorded more often during the routines-based condition than during the traditional ABC or control conditions. This was most clear for Andre, as the relevant routine identified from the FACTS (independent reading/writing tasks) rarely occurred during traditional ABC observations. For Mark's data, there was considerable overlap between observed levels of problem behavior, but a closer look at the data revealed that during each traditional ABC observation showing high levels of problem behavior, the relevant routine, large-group reading instruction, was present for 80% or more of the session.
Differences across observation conditions were less clear when looking at Renee's data. Across four sessions of the routines-based condition, during which large-group instruction was present for a minimum of 90% of the time, observed instances of problem behavior gradually increased. There was, however, overlap between the frequency of talk-outs observed during the routines-based and traditional ABC conditions. Because the traditional ABC condition may have presented a variety of antecedent variables within the general context of reading and writing, it was important to determine which specific routines coincided with problem behavior during those sessions. During the first traditional ABC session, problem behavior was observed only in the presence of the relevant routine, and during the second session one instance of problem behavior was observed in the absence of large group instruction. The third traditional ABC session showed three instances of problem behavior occurring when large group instruction was not present. During the fourth session, no instances of problem behavior were recorded.
The relatively low frequency of Renee's target behavior may have contributed to the lack of clarity in these data.
Are the suggested functions of behavior provided by the FACTS interview, traditional ABC observation, and routines-based observation in agreement?
This question was evaluated by comparing the primary hypothesized function from the FACTS interview to the contingency space analyses for each participant. Overall, the data analyzed from the routines-based condition provided the clearest indication of hypothesized function. For each participant, the suggested function of problem behavior from the FACTS was supported by direct observation data.
Is there a functional relation between an intervention based upon the hypothesized function of problem behavior and a decrease in problem behavior?
This question was evaluated using either an ABAB reversal design or an alternating treatments design. Overall, interventions based upon the hypothesized function of problem behavior were functionally related to reductions in problem behavior. Although academic engagement was not the primary dependent variable of interest in this study, and functional control was not clearly established for all participants, there was some indication that the interventions implemented may have increased academic engagement. Data for each participant showed variability and some overlapping data points between baseline and intervention phases, but mean levels of academic engagement were generally higher while interventions were being implemented.
Study Limitations
The alternating treatments and reversal designs used in this study allowed for the comparison of various conditions and for an evaluation of the utility of routines analyses within functional behavior assessment. This section describes several limitations of the present study.
Several limitations of this study relate to the potential benefit of collecting additional data. One such limitation is the number of data points collected during the Phase I alternating treatments designs. Particularly for Mark's and Renee's data, where variability of observed problem behavior and overlap between routines-based and traditional ABC observation data necessitated a careful analysis of which routines coincided with problem behavior during the traditional ABC conditions, the clarity of visual and quantitative analyses may have improved if the alternating treatments design had been extended. The length of the Phase II intervention analysis is another limitation that might have been mitigated if additional data had been collected. Due to time constraints, Phase I data were used as the initial baseline, and phases of the ABAB reversal designs were generally transitioned as soon as reasonable arguments could be made regarding functional control. Although observation conditions during Phase II matched those of the routines-based condition from Phase I, a separate baseline for the intervention analysis would have allowed further demonstration of the stability of behavior before interventions were implemented. Similarly, additional data collected during the second baseline and intervention phases may have strengthened the demonstration of functional relations between the interventions and problem behavior. Finally, the lack of explicit data collection for variables relevant to integrity during intervention is another limitation of the current study.
During intervention for Andre, this study was able to compare the percentage of intervals scored with adult attention following the successful completion of a segment of independent work with the percentage of intervals scored with adult attention following problem behavior. Intervention fidelity may have been better documented if further data had been collected regarding the details of Andre's behavior during the self-recruitment sequence. Specifically, the use of a fidelity checklist with a task analysis of the appropriate sequence may have simplified the task of assessing the fidelity of this intervention, as opposed to relying upon session logs and raw data. Additionally, because adults were directed to provide attention if Andre's behavior represented a risk to safety or a serious disruption, it may have been beneficial to collect further data on the problem behavior that resulted in adult attention. During intervention for Mark, we were able to compare overall instances of adult attention delivered following problem behavior with those delivered at other times. Intervention fidelity would have been better documented had we recorded when Mark's disruption specifically involved peers and whether or not peer attention was delivered following those disruptions. During intervention for Renee, intervention fidelity would have been better documented had we collected data on student and adult completion of the PRIDE point card, and specifically the degree to which attention was delivered contingent upon Renee meeting the expectations to remain academically engaged and refrain from talk-outs.
Finally, two of the participants were primary school-aged males, while one participant was a middle school-aged female. It is unknown whether similar results could be expected with students of different ages, sexes, or ethnicities. Additionally, as the only female in the study, and one who showed problem behavior at a relatively low rate, conclusions drawn from Renee's data may be tentative even when applied to other females her age.
Implications for Research and Practice
In this section, implications for practice regarding routines-based functional assessment, including indirect methods and direct observation, will be discussed. In addition, implications for future research will be discussed.
Implications for Practice
The present study documented the utility of identifying relevant routines during an interview to guide direct observations. A routines-based direct observation can be a useful part of FBA in the classroom setting. When direct observations were focused on specific routines identified by the FACTS, higher rates of problem behavior were observed, and clearer indications of the possible function of problem behavior were generated through CSA. As part of a multi-source FBA process, an indirect assessment like the FACTS can be used to gain valuable information about specific routines associated with higher levels of problem behavior. This information can be used to design direct observation conditions that may lead to more effective and efficient direct observations.
In addition to making direct observations more efficient, the routines-based analysis procedures utilized in this study may hold benefits for classroom teachers. This study suggested that teachers were able to implement the observation conditions we created with fidelity. They were involved with the direct observation portion of the FBA instead of being interviewed and eventually given an intervention plan to try.
All teachers in this study rated their knowledge and awareness of the Phase II interventions highly on the contextual fit survey. Because the interventions developed were directly related to the data collected during the routines-based assessment condition, it may be that teacher participation during the direct observation contributed to their perceived knowledge of and familiarity with the intervention plan.
Implications for Research
This section will outline potential areas for future research based on the results and limitations of the present study. First, research regarding whether routines-focused FBA can lead to more effective and efficient direct observation will be discussed. Second, future studies on the agreement between various FBA methods will be discussed.
The present study contributed to the literature supporting the use of routines-based observation conditions as part of FBA in a school setting. Many questions remain, however, regarding the extent to which routines-based observation may serve as part of an efficient and effective FBA process. One question in this area is whether higher rates of problem behavior are observed during routines-based conditions compared to other observation conditions. This was generally found to be true in the current study, but without looking deeper into the specific routines present during the traditional ABC condition, this was not always clearly indicated. Future research should include routines-based observation where a specific routine associated with problem behavior is present and can be compared to control conditions where the relevant routine is not present. Furthermore, future research involving direct observation similar to the traditional ABC condition should document the presence of specific routines. Repeated comparison of observed problem behavior in the presence and the absence of relevant routines may clarify the answer to this question.
Another question that should be explored in future research is the consistency of suggested functions of behavior across various assessment methods. The present study generally found agreement between the FACTS and the various direct observation conditions. When agreement was not entirely clear between conditions, this was due to inconclusive or mixed results from data collected during traditional ABC or control conditions. Future research should repeat these comparisons in educational settings and also examine data collected by observing a wider range of teachers, participant demographics, and target behaviors. Another question related to agreement between assessment methods is whether data collected during the routines-based condition offered clearer evidence of a suggested function of behavior when compared to the traditional ABC condition. This study generally found this to be the case via CSA calculations. Review of Mark's data suggested peer attention as the function of his disruptive behavior across all observation methods, but CSA results were stronger during the routines-based conditions.
Another area for future research involves whether interventions based upon suggested functions are successful. Future research should note when separate or dual functions of behavior are indicated by the various observation methods and then compare the effectiveness of distinct interventions based upon those competing hypotheses.
For example, if data from the routines-based condition suggested adult attention as a primary function, and data from the traditional ABC condition suggested both adult and peer attention as functions of problem behavior, then Phase II of the study could incorporate a component analysis of an adult/peer attention-based intervention. Such a comparison could evaluate the relative effectiveness of a multi-function intervention compared to a single-function intervention, thus informing the relative intervention utility of the observation methods upon which they were based.
Another limitation of the current study related to intervention evaluation was the amount of time interventions were in place. Phase II intervention and reversal data were collected only during 10-minute sessions. Data were not always collected on consecutive days, some return-to-baseline phases were shortened due to classroom scheduling constraints, and no long-term follow-up data were collected, due in part to the fact that the school year was ending during intervention data collection. Future studies should address these limitations by collecting additional data over longer periods of time on a more consistent schedule and by collecting follow-up data.
APPENDIX A
CONSENT AND ASSENT FORMS
Informed Consent to Participate in a Research Study
Parent/Guardian/Family/Student Consent
The Impact of Functional Behavior Assessment Methods on Promoting Desired Behaviors and Decreasing Problem Behaviors
Your son/daughter is invited to participate in a study conducted through the University of Oregon designed to evaluate behavior support methods used in schools. The study will be conducted by Aaron Barnes, under the supervision of Cynthia Anderson, both from the University of Oregon's College of Education. The purpose of the study is to examine how different assessment methods can aid educators in developing effective behavior supports for students. Your son/daughter was selected as a possible participant in this study because staff at his or her school believe he or she may benefit from additional behavior supports. The study will begin in September 2007 and end in June 2009. To provide students with effective and efficient behavior supports, participation by your son or daughter would involve:
• Behaving as they normally do when presented with typical teacher requests (like completing a math assignment or transitioning from one activity to another).
• Providing their own input or feedback regarding what a support plan might entail or what rewards they might like to work toward.
• Participating in a designed support plan with knowledge of what behaviors are expected, which behaviors are inappropriate, and what consequences will follow appropriate and problem behaviors.
To tailor the behavior supports to your child's needs, the teacher and school personnel would complete the following activities:
• Review your child's academic and behavioral school records, social strengths and weaknesses, and attendance and discipline referral patterns, if applicable.
• Complete an interview detailing your child's behavior in various settings and during various activities.
To conduct the study, researchers from the University of Oregon will complete the following activities:
• Collaborate with school personnel to collect and review data.
• Conduct direct observations of your child in his or her classroom to collect data on social behavior and/or academic behavior.
Your child will not be identified in written or professional presentations of the results of this study. Every effort will be made to organize information using altered names, and professional presentations will never refer to your child by name. In addition, all information will be kept in a lockable location and destroyed after the study and holding period are complete. There remains, however, a small risk that your student may be identified as a participant in this study. There is a distinct likelihood that your student may benefit from participation in the study. Function-based behavioral supports have shown promising results in previous studies where social and academic gains were documented for participating students.
Your consent to your child's participation in the study is voluntary. Your decision whether or not to allow your child to participate will not affect your relationship with the school district or the instruction your child receives in his or her school. If you allow your child to participate, you are free to withdraw your consent and terminate your child's participation in the study at any time without penalty. Prior to your child's participation in the study, we will also ask your son/daughter if he or she gives assent to participate. Their assent will be necessary for participation in the study.
If you have any questions, please feel free to contact Aaron Barnes at (541) 285-1077 or Cynthia Anderson at the University of Oregon (346-2671). If you have questions regarding your rights as a research subject, contact the Office of Human Subjects Compliance, University of Oregon, Eugene, OR 97403, (541) 346-2510. You have been given a copy of this form to keep.
Your signature indicates that you have read and understand the information provided above, that you willingly agree to participate, that you may withdraw your consent at any time and discontinue participation without penalty, that you have received a copy of this form, and that you are not waiving any legal claims, rights or remedies.
Parent/Legal Guardian: ______________
Signature: ______________
Name of Child: ______________
Date: ______________
Informed Consent to Participate in a Research Study
Teacher Consent
The Impact of Functional Behavior Assessment Methods on Promoting Desired Behaviors and Decreasing Problem Behaviors
You are invited to participate in a research study conducted by Aaron Barnes, under the supervision of Cynthia Anderson, both in the University of Oregon's College of Education. The purpose of the study is to examine how different assessment methods can aid educators in developing effective behavior supports for students. The study will begin in September 2007 and end in June 2009. You were selected as a possible participant because you currently teach students who might benefit from additional behavioral support. The function-based supports in this study will generally involve:
• Gathering detailed information about appropriate behaviors and problem behaviors.
• Gathering detailed information about situations or settings in which these behaviors of interest are likely to occur.
• Developing hypotheses for the function of targeted behaviors and developing interventions or behavior support plans based on those hypotheses.
• Monitoring levels of appropriate and problem behaviors in order to evaluate the effects of the intervention.
• Direct observation in your classroom by a researcher.
If you choose to participate in the study, researchers will ask you to complete the following activities to maximize the benefit for the student:
• Complete the Functional Assessment Checklist for Teachers and Staff (FACTS), a 20-25 minute interview with a researcher. This information will be used to identify the function of students' problem behavior.
• Present the student with various tasks or requests (directed by the researcher) and respond to the student's behavior as you normally would.
• Consult with a researcher to develop and implement behavior supports for the student.
• Complete the Contextual Fit Checklist to provide your feedback regarding the behavior support plan and how well it matches your skills, values, resources, and administrative support. This form typically takes about 10 minutes to complete.
Researchers from the University of Oregon will conduct direct observations of participating students in their classrooms to collect data on social and academic behaviors. Neither you nor the student will be identified in written or professional presentations concerning this study. Every effort will be made to organize information using altered names, and professional presentations will never refer to you or your student by name. In addition, all information will be kept in a lockable location and destroyed after the study and holding period are complete. There remains, however, a small risk that you may be identified as a participant in this study. There is a distinct likelihood that your student may benefit from participation in the study. Empirical evidence suggests that function-based supports lead to students' social and academic gains.
Your participation is voluntary. Your decision whether or not to participate will not affect your relationship with the school district. If you decide to participate, you are free to withdraw your consent and discontinue participation at any time without penalty. If you have any questions, please feel free to contact Aaron Barnes at (541) 285-1077 or Cynthia Anderson at the University of Oregon (346-2671). If you have questions regarding your rights as a research subject, contact the Office of Human Subjects Compliance, University of Oregon, Eugene, OR 97403, (541) 346-2510. You have been given a copy of this form to keep.
Your signature indicates that you have read and understand the information provided above, that you willingly agree to participate, that you may withdraw your consent at any time and discontinue participation without penalty, that you have received a copy of this form, and that you are not waiving any legal claims, rights or remedies.
Print Name: ______________
Signature: ______________
Date: ______________
Student Assent
The Impact of Functional Behavior Assessment Methods on Promoting Desired Behaviors and Decreasing Problem Behaviors
We want to ask you if you want to be part of a research study. Your (parents, guardian, etc.) has told us it is ok for you to be part of this project, but we want to ask you if you are willing to help us by being part of the study. If you are part of the study, you will take part in function-based supports, which means you will work with your teachers, parents, and a researcher to learn some ways to be more successful in school. We will have people come to watch you and the other students to see if the program is helpful, but you will work primarily with your teacher. If you choose to be part of the study, we will not use your name when we share with other people how you do in school.
Nobody except the people you work with will know who you are. We expect to work with you for several weeks, and in the end you should have skills that will help you in school. If you choose to be part of the study, you can always change your mind, and if you choose not to be part of the study, it will not affect anything else about what you do at school. Do you have any questions? If you are willing to be part of the study, we would ask you to sign this form.
Student Signature: ______________
APPENDIX B
FUNCTIONAL ASSESSMENT CHECKLIST FOR TEACHERS AND STAFF (FACTS)
Functional Assessment Checklist for Teachers and Staff (FACTS-Part A)
Student/Grade: ______  Date: ______
Interviewer: ______  Respondent(s): ______
Student Profile: Please identify at least three strengths or contributions the student brings to school.
Problem Behavior(s): Identify problem behaviors: Tardy, Unresponsive, Withdrawn, Fight/Physical Aggression, Inappropriate Language, Verbal Harassment, Verbally Inappropriate, Disruptive, Insubordination, Work Not Done, Self-Injury, Theft, Vandalism, Other. Describe problem behavior: ______
Identifying Routines: Where, When, and With Whom Problem Behaviors Are Most Likely. The form provides a table with columns for Schedule (Times), Activity, Likelihood of Problem Behavior (rated from 1 = Low to 6 = High), and Specific Problem Behavior, with one row per scheduled activity.
Select 1-3 routines for further assessment: Select routines based on (a) similarity of activities (conditions) with ratings of 4, 5, or 6 and (b) similarity of problem behavior(s). Complete the FACTS-Part B for each routine identified.
March, Horner, Lewis-Palmer, Brown, Crone, Todd & Carr (2000), 1/19/05
Functional Assessment Checklist for Teachers & Staff (FACTS-Part B)
Student/Grade: ______  Date: ______
Interviewer: ______  Respondent(s): ______
Routine/Activities/Context: Which routine (only one) from the FACTS-Part A is assessed? ______  Problem Behavior(s): ______
Provide more detail about the problem behavior(s): What does the problem behavior(s) look like? How often does the problem behavior(s) occur? How long does the problem behavior(s) last when it does occur? What is the intensity/level of danger of the problem behavior(s)?
What are the events that predict when the problem behavior(s) will occur? (Predictors)
Related Issues (setting events): illness, drug use, negative social, conflict at home, academic failure, other.
Environmental Features: reprimand/correction, physical demands, socially isolated, with peers, structured activity, unstructured time, tasks too boring, activity too long, tasks too difficult, other.
What consequences appear most likely to maintain the problem behavior(s)?
Things that are Obtained: adult attention, peer attention, preferred activity, money/things, other.
Things Avoided or Escaped From: hard tasks, reprimands, peer negatives, physical effort, adult attention, other.
Identify the summary that will be used to build a plan of behavior support: Setting Events & Predictors; Problem Behavior(s); Maintaining Consequence(s).
How confident are you that the Summary of Behavior is accurate? (1 = Not very confident to 6 = Very confident)
What current efforts have been used to control the problem behavior?
Strategies for preventing problem behavior: schedule change, seating change, curriculum change, other.
Strategies for responding to problem behavior: reprimand, office referral, detention, other.
APPENDIX C
SELF-ASSESSMENT OF CONTEXTUAL FIT
Self-Assessment of Contextual Fit in Schools
Horner, Salentine, & Albin, 2003
The purpose of this interview is to assess the extent to which the elements of a behavior support plan fit the contextual features of your school environment. The interview asks you to rate (a) your knowledge of the elements of the plan, (b) your perception of the extent to which the elements of the behavior support plan are consistent with your personal values and skills, and (c) the school's ability to support implementation of the plan. This information will be used to design practical procedures that will help school personnel support children with problem behaviors. The information you provide will be maintained and reported in a confidential manner consistent with the standards of the American Psychological Association. You will never be identified. Please read the attached behavior support plan and provide your perceptions of the specific elements in this plan. Thank you for your contribution and assistance.
Name of Interviewee: ______  Role: ______
Support plan reviewed: ______
Each item below is rated on a 6-point scale: 1 = Strongly Disagree, 2 = Moderately Disagree, 3 = Barely Disagree, 4 = Barely Agree, 5 = Moderately Agree, 6 = Strongly Agree.
Knowledge of elements in the Behavior Support Plan
1. I am aware of the elements of this behavior support plan.
2. I know what I am expected to do to implement this behavior support plan.
Skills needed to implement the Behavior Support Plan
3. I have the skills needed to implement this behavior support plan.
4. I have received any training that I need to be able to implement this behavior support plan. (No training needed: ___)
Values are consistent with elements of the behavior support plan
5. I am comfortable implementing the elements of this behavior support plan.
6. The elements of this behavior support plan are consistent with the way I believe students should be treated.
Resources available to implement the plan
7. My school provides the faculty/staff time needed to implement this behavior support plan.
8. My school provides the funding, materials, and space needed to implement this behavior support plan.
Administrative Support
9. My school provides the supervision support needed for effective implementation of this behavior support plan.
10. My school administration is committed to investing in effective design and implementation of behavior support plans.
Effectiveness of Behavior Support Plan
11. I believe the behavior support plan will be (or is being) effective in achieving targeted outcomes.
12. I believe the behavior support plan will help prevent future occurrence of problem behaviors for this child.
Behavior Support Plan is in the best interest of the student
13. I believe this behavior support plan is in the best interest of the student.
14. This behavior support plan is likely to assist the child to be more successful in school.
The Behavior Support Plan is efficient to implement
15. Implementing this behavior support plan will not be stressful.
16. The amount of time, money and energy needed to implement this behavior support plan is reasonable.
REFERENCES
Alter, P. J., Conroy, M. A., Mancil, G. R., & Haydon, T. (2008). A comparison of functional behavior assessment methodologies with young children: Descriptive methods and functional analysis. Journal of Behavioral Education, 17, 200-219.
Anderson, C. M., English, C. L., & Hendrick, T. M. (2006). Use of structured descriptive assessment with typically developing children. Behavior Modification, 30, 352-378.
Anderson, C. M., & Long, E. S. (2002). Use of structured descriptive assessment methodology to identify variables affecting problem behavior. Journal of Applied Behavior Analysis, 35, 137-154.
Barton-Atwood, S. M., Wehby, J. H., Gunter, P. L., & Lane, K. L. (2003). Functional behavior assessment rating scales: Intrarater reliability with students with emotional and behavioral disorders. Behavioral Disorders, 28, 386-400.
Bergan, J. R., & Kratochwill, T. R. (1990). Behavioral consultation and therapy. New York: Plenum Press.
Borgmeier, C. (2003). An evaluation of informant confidence ratings as a predictive measure of the accuracy of hypotheses from functional assessment interviews. Unpublished doctoral dissertation, University of Oregon.
Carr, E. G., & Durand, V. M. (1985). Reducing behavior problems through functional communication training. Journal of Applied Behavior Analysis, 18, 111-126.
Carr, E. G., Horner, R. H., Turnbull, A. P., Marquis, J. G., McLaughlin, D. M., McAtee, M. L., et al. (1999). Positive behavior support for people with developmental disabilities: A research synthesis. Washington, DC: American Association on Mental Retardation.
Carr, E. G., Yarbrough, S. C., & Langdon, N. A. (1997). Effects of idiosyncratic stimulus variables on functional analysis outcomes. Journal of Applied Behavior Analysis, 30, 673-686.
Didden, R., Duker, P. C., & Korzilius, H. (1997). Meta-analytic study on treatment effectiveness for problem behaviors with individuals who have mental retardation. American Journal of Mental Retardation, 101, 389-399.
Dunlap, G., Kern-Dunlap, L., Clarke, S., & Robbins, F. R. (1991).
Functional assessment, curricular revision, and severe behavior problems. Journal of Applied Behavior Analysis, 24, 387-397.

Durand, V. M., & Crimmins, D. B. (1988). The Motivation Assessment Scale: An administration manual. Albany: State University of New York.

English, C. L., & Anderson, C. M. (2006). Evaluation of the treatment utility of the analog functional analysis and the structured descriptive assessment. Journal of Positive Behavior Interventions, 8, 212-229.

Erickson, M. J., Stage, S. A., & Nelson, R. J. (2006). Naturalistic study of the behavior of students with EBD referred for functional behavioral assessment. Journal of Educational and Behavioral Disorders, 14, 31-40.

Floyd, R. G., Phaneuf, R. L., & Wilczynski, S. M. (2005). Measurement properties of indirect assessment methods for functional behavioral assessment: A review of research. School Psychology Review, 34, 58-73.

Hall, S. S. (2005). Comparing descriptive, experimental and informant-based assessments of problem behaviors. Research in Developmental Disabilities, 26, 514-526.

Harding, J., Wacker, D. P., Cooper, L. J., Asmus, J., Jensen-Kovalan, P., & Grisolano, L. A. (1999). Combining descriptive and experimental analysis of young children with behavior problems in preschool settings. Behavior Modification, 23, 316-333.

Horner, R. H., Todd, A. W., Lewis-Palmer, T., Irvin, L. K., Sugai, G., & Boland, J. B. (2004). The School-wide Evaluation Tool (SET): A research instrument for assessing school-wide positive behavior support. Journal of Positive Behavior Interventions, 6, 3-12.

Iwata, B. A., Pace, G. M., Dorsey, M. F., Zarcone, J. R., Vollmer, T. R., Smith, R. G., et al. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197-209.

Iwata, B. A., Vollmer, T. R., & Zarcone, J. R. (1990). The experimental (functional) analysis of behavior disorders: Methodology, applications, and limitations. In A. C. Repp & N. N. Singh (Eds.), Perspectives on the use of nonaversive and aversive interventions for persons with developmental disabilities (pp. 301-330). Sycamore, IL: Sycamore.

Kinch, C., Lewis-Palmer, T., Hagan-Burke, S., & Sugai, G. (2001). A comparison of teacher and student functional behavior assessment interview information from low-risk and high-risk classrooms. Education and Treatment of Children, 24, 480-494.

Kwak, M. M., Ervin, R. A., Anderson, M. Z., & Austin, J. (2004). Agreement of function across methods used in school-based functional assessment with preadolescent and adolescent students. Behavior Modification, 28, 375-401.

Lerman, D. C. (2003). From the laboratory to community application: Translational research in behavior analysis. Journal of Applied Behavior Analysis, 36, 415-419.

Lerman, D. C., & Iwata, B. A. (2003). Descriptive and experimental analyses of variables maintaining self-injurious behavior. Journal of Applied Behavior Analysis, 26, 293-319.

Lewis, T. J., Scott, T. M., & Sugai, G. (1994). The problem behavior questionnaire: A teacher-based instrument to develop functional hypotheses of problem behavior in general education classrooms. Diagnostique, 19, 103-115.

McIntosh, K., Borgmeier, C. J., Anderson, C. M., Horner, R. H., Rodriguez, B., & Tobin, T. (2008). Technical adequacy of the Functional Assessment Checklist-Teachers and Staff. Journal of Positive Behavior Interventions.

March, R., Horner, R. H., Lewis-Palmer, L., Brown, D., Crone, D., Todd, A. W., & Carr, E. (2000). Functional Assessment Checklist for Teachers and Staff (FACTS).
Eugene: Department of Educational and Community Supports, University of Oregon.

Martens, B. K., DiGennaro, F. D., Reed, D. D., Szczech, F. M., & Rosenthal, B. D. (2008). Contingency space analysis: An alternative method for identifying contingent relations from observational data. Journal of Applied Behavior Analysis, 41, 69-81.

Murdock, S. G., O'Neill, R. E., & Cunningham, E. (2005). A comparison of results and acceptability of functional behavioral assessment procedures with a group of middle school students with emotional/behavioral disorders. Journal of Behavioral Education, 14, 5-18.

Newcomer, L. L., & Lewis, T. J. (2004). Functional behavioral assessment: An investigation of assessment reliability and effectiveness of function-based interventions. Journal of Emotional and Behavioral Disorders, 12, 168-181.

Northup, J., Wacker, D., Sasso, G., Steege, M., Cigrand, K., Cook, J., et al. (1991). A brief functional analysis of aggressive and alternative behavior in an outclinic setting. Journal of Applied Behavior Analysis, 24, 509-522.

O'Neill, R. E., Horner, R. H., Albin, R. W., Sprague, J. R., Storey, K., & Newton, J. S. (1997). Functional assessment and program development for problem behaviors: A practical handbook. New York: Brooks/Cole.

Peck, J., & Sasso, G. M. (1997). Use of the structural analysis hypothesis testing model to improve social interactions via peer-mediated intervention. Focus on Autism & Other Developmental Disabilities, 12, 219-231.

Peck, J., Sasso, G. M., & Stambaugh, M. (1998). Functional analyses in the classroom: Gaining reliability without sacrificing validity. Preventing School Failure, 43, 14-18.

Penno, D. A., Frank, A. R., & Wacker, D. P. (2000). Instructional accommodations for adolescent students with severe emotional or behavioral disorders. Behavioral Disorders, 25, 325-343.

Reed, H., Thomas, E., Sprague, J. R., & Horner, R. H. (1997). The student guided functional assessment interview: An analysis of student and teacher agreement. Journal of Behavioral Education, 7, 33-49.

Roberts, M. L., Marshall, J., Nelson, J. R., & Albers, C. A. (2001). Curriculum-based assessment procedures embedded within functional behavioral assessments: Identifying escape-motivated behaviors in a general education classroom. School Psychology Review, 30, 264-277.

Stage, S. A., Cheney, D., Walker, B., & LaRocque, M. (2002). A preliminary discriminant and convergent validity study of the teacher functional behavioral assessment checklist. School Psychology Review, 31, 71-93.

Stage, S. A., Jackson, H. G., Moscovitz, K., Erickson, M. J., Thurman, S. O., Wyeth, J., et al. (2006). Using multimethod-multisource functional behavioral assessment for students with disabilities. School Psychology Review, 35, 451-471.

Stichter, J., & Conroy, M. (2005). Using structural analysis in natural settings: A responsive functional assessment strategy. Journal of Behavioral Education, 14(1), 19-34.

Stichter, J. P., Sasso, G. M., & Jolivette, K. (2004). Structural analysis and intervention in a school setting: Effects on problem behavior for a student with an emotional/behavioral disorder.

Van Camp, C. M., Lerman, D. C., Kelley, M. E., Roane, H. S., Contrucci, S. A., & Vorndran, C. M. (2000). Further analysis of idiosyncratic antecedent influences during the assessment and treatment of problem behavior. Journal of Applied Behavior Analysis, 33, 207-221.

Watson, T. S., & Sterling, H. E. (1998). Brief functional analysis and treatment of a vocal tic. Journal of Applied Behavior Analysis, 31, 471-474.
Wilder, D. A., Chen, L., Atwell, J., Pritchard, J., & Weinstein, P. (2006). Brief functional analysis and treatment of tantrums associated with transitions in preschool children. Journal of Applied Behavior Analysis, 39, 103-107.

Zarcone, J. R., Rodgers, T. A., Iwata, B. A., Rourke, D. A., & Dorsey, M. F. (1991). Reliability analysis of the Motivation Assessment Scale: A failure to replicate. Research in Developmental Disabilities, 12, 349-360.