Deep Research - Functional Behavior Assessment in Public Schools
1/30/2026
Functional Behavior Assessment Tools and Rubrics in U.S. Schools
Functional Behavior Assessments (FBAs) are used in U.S. public schools to understand why a student exhibits challenging behavior and to guide effective intervention plans.
Decades of research have established FBA as the gold standard for addressing problem behavior ( Are We on Course Yet? Functional Behavior Assessment and Behavior Intervention Plan Technical Adequacy in Schools - PMC ).
In fact, special education law (IDEA) requires FBAs and subsequent Behavior Intervention Plans (BIPs) in certain cases (e.g. when a behavior leads to a change in placement or is found to be a manifestation of the student’s disability) ( Are We on Course Yet? Functional Behavior Assessment and Behavior Intervention Plan Technical Adequacy in Schools - PMC ).
There are three broad categories of FBA methods (FBA categories - Screening and Assessment): indirect assessments, direct descriptive assessments, and experimental functional analyses.
Below is a comprehensive review of evidence-based tools and scoring systems in each category – including how they are used in school settings, scoring criteria, implementation guidelines, and research on their effectiveness.
We also highlight specialized approaches (ACT-informed assessments, Practical Functional Assessment with Skill-Based Treatment, and precursor analyses) that have gained attention in recent years.
Indirect Assessment Tools (Rating Scales and Interviews)
Indirect FBA methods gather information from those who know the student (teachers, parents, the student themselves) via questionnaires or interviews.
These tools are efficient and commonly used in schools to generate hypotheses about behavior function without directly observing the behavior (Indirect Assessments - Screening and Assessment).
They typically yield numerical scores or summaries indicating which potential reinforcers (e.g. attention, escape) might be maintaining the behavior.
Table 1 summarizes key indirect tools that are peer-reviewed and widely used:
Table 1. Common Indirect FBA Tools Used in Schools
Motivation Assessment Scale (MAS; Durand & Crimmins, 1988): 16 items rated on a 7-point frequency scale; yields subscale scores for sensory, escape, attention, and tangible functions, with the highest-scoring subscale suggesting the likely function.
Questions About Behavioral Function (QABF; Matson & Vollmer, 1995): 25-item rating scale with subscales for attention, escape, non-social (automatic), physical, and tangible functions.
Functional Analysis Screening Tool (FAST; Iwata et al., 2013): 16 yes/no informant items that point toward social reinforcement (attention/preferred items), social reinforcement (escape), automatic reinforcement (sensory stimulation), or automatic reinforcement (pain attenuation).
Problem Behavior Questionnaire (PBQ; Lewis, Scott, & Sugai, 1994): 15-item teacher questionnaire developed for general education classrooms; items address peer attention, adult attention, escape, and setting events.
Implementation and Use: Indirect tools like the above are often the first step in a school FBA, gathering input from those who see the behavior daily.
They are valued for being time-efficient and easy to use.
For example, the MAS or QABF can be filled out by multiple teachers to see if they agree on the function of a student’s behavior (MAS).
These tools help pinpoint whether a behavior is likely maintained by peer attention, escape from academic tasks, sensory stimulation, etc.
Based on the results, the team forms initial hypotheses.
However, caution is warranted: because these rely on perceptions and memory, they can be biased or inaccurate (Indirect Assessments - Screening and Assessment).
Research has shown that while indirect assessments often identify a function, they don’t always match the results of direct observation or functional analysis (Questions About Behavior Function - Wikipedia) (Reliability and validity of the functional analysis screening tool - PubMed).
Therefore, best practice is to verify hypotheses from rating scales by collecting observational data.
Direct Descriptive Assessment Methods
Direct descriptive assessments involve observing the student’s behavior in the natural environment (classroom, playground, etc.) and recording what happens before and after the behavior.
Unlike indirect methods, direct assessment does not rely on memory or subjective judgment – it provides objective data on actual events.
Common direct assessment techniques include Antecedent-Behavior-Consequence (A-B-C) recording, scatterplot analysis, and other structured observation frameworks:
ABC Narrative Recording: Teachers or behavior specialists write down what happened immediately before (Antecedent) the behavior, a description of the Behavior itself, and what happened after (Consequence) each time the target behavior occurs.
By reviewing ABC logs, patterns can emerge (e.g. “every time the teacher gives a multi-step instruction, the student bangs the desk and then is sent out of class”) which point to a likely function.
Schools often use ABC data charts with columns for A, B, C to structure this process.
Example: An ABC log might show that during math (antecedent: given difficult worksheet), the student tears the paper (behavior), and the teacher removes the task (consequence), suggesting escape from work is the function.
ABC charts help teams determine exactly what precedes and follows the behavior, giving insight into “why” it keeps happening (Steps for Implementation: Functional Behavior Assessment).
Structured ABC Checklists: A variation of narrative ABC is a checklist or coding system.
Observers might have a predefined list of possible antecedents (e.g. given a demand, denied a toy, transition occurred) and consequences (e.g. teacher attention, peer reaction, escape from task) and mark which ones occur each time the behavior is observed.
After enough observations, the frequency of certain antecedent-consequence combinations can be tallied.
For instance, data might show 80% of incidents follow a correction from the teacher and result in peer laughter – quantitatively supporting an attention function.
This adds a scoring system to ABC observations (counting occurrences) and allows calculation of conditional probabilities (likelihood of the behavior given a specific antecedent or consequence) (A Comparison of Descriptive and Functional Analyses of ...).
Such analysis can strengthen confidence in the hypothesized function, though it remains correlational.
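To make that tallying concrete, here is a minimal sketch of the arithmetic involved; the labels and counts are hypothetical, not drawn from any particular tool or dataset.

```python
# Hypothetical tallies from several classroom observation sessions (for
# illustration only; labels and numbers are made up).
# For each coded antecedent: how many times it occurred during observation,
# and how many of those occurrences were followed by the target behavior.
antecedent_tallies = {
    "teacher correction": {"occurred": 20, "followed_by_behavior": 16},
    "difficult task":     {"occurred": 25, "followed_by_behavior": 5},
    "transition":         {"occurred": 15, "followed_by_behavior": 2},
}

# For each coded consequence: how many behavior incidents it followed.
consequence_tallies = {"peer laughter": 18, "task removed": 3, "teacher attention": 2}
total_incidents = sum(consequence_tallies.values())

# Conditional probability of the behavior given each antecedent.
print("P(behavior | antecedent):")
for label, t in antecedent_tallies.items():
    print(f"  {label}: {t['followed_by_behavior'] / t['occurred']:.2f}")

# Conditional probability of each consequence given that the behavior occurred.
print("P(consequence | behavior):")
for label, count in consequence_tallies.items():
    print(f"  {label}: {count / total_incidents:.2f}")
```

In this made-up example, the high probabilities attached to teacher correction and peer laughter would quantitatively support an attention hypothesis, which the team would still treat as correlational.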
Scatterplot Analysis: A scatterplot is a visual recording matrix used to find temporal patterns in behavior occurrence.
Typically, days or class periods are listed on one axis and time intervals on the other; observers mark intervals when the behavior occurs.
This yields a grid where “clusters” of marks show when the behavior is most frequent (A SCATTER PLOT FOR IDENTIFYING STIMULUS CONTROL OF ...).
In schools, scatterplots help identify when and under what general contexts problem behavior is most likely – for example, a scatterplot might reveal that a student’s outbursts happen mainly in the afternoon during unstructured times.
This can suggest a setting-event (e.g. fatigue) or indicate which classes/activities are high-risk.
Scatterplots can also indirectly point to function; e.g., behavior occurring only during low-attention periods (like independent work time) might be attention-seeking.
They also guide scheduling of interventions (“times of day when an intervention might be implemented to reduce the behavior”) (Steps for Implementation: Functional Behavior Assessment).
It’s important to note that FBA scatterplots are simple behavior charts, not to be confused with statistical scatterplots (Steps for Implementation: Functional Behavior Assessment) – they are used for pattern-finding in behavior assessment.
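As an illustration of how scatterplot marks can be summarized, the sketch below builds the day-by-interval grid from hypothetical data and flags the interval with the most occurrences; the days, time blocks, and marks are placeholders.

```python
# Hypothetical scatterplot data: each mark is a (day, interval) pair in which
# the target behavior was observed at least once.
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
intervals = ["8:00-9:00", "9:00-10:00", "10:00-11:00", "12:00-1:00", "1:00-2:00"]
marks = [
    ("Mon", "1:00-2:00"), ("Tue", "1:00-2:00"), ("Tue", "12:00-1:00"),
    ("Wed", "1:00-2:00"), ("Thu", "1:00-2:00"), ("Fri", "12:00-1:00"),
]

# Build the grid: rows = intervals, columns = days, "X" marks an occurrence.
grid = {(d, i): "" for d in days for i in intervals}
for d, i in marks:
    grid[(d, i)] = "X"

# Print the grid and tally occurrences per interval to spot clusters.
print(f"{'Interval':<12}" + "".join(f"{d:>5}" for d in days))
for i in intervals:
    print(f"{i:<12}" + "".join(f"{grid[(d, i)]:>5}" for d in days))

per_interval = {i: sum(1 for d in days if grid[(d, i)] == "X") for i in intervals}
hot_spot = max(per_interval, key=per_interval.get)
print(f"Most frequent interval: {hot_spot} ({per_interval[hot_spot]} of {len(days)} days)")
```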
Continuous Duration/Frequency Recording: Sometimes schools collect data on how often or how long the behavior occurs across different settings or activities, without the systematic manipulation used in an experimental analysis.
For example, a teacher might note that a child’s tantrums happen 5 times in reading class, 0 times in art, and 3 times in math, indicating reading is a triggering context (possibly an escape function if reading is difficult for the student).
Recording the intensity or duration can also be useful (though intensity is subjective without a clear rubric).
Implementation: Direct observations are often conducted after (or in conjunction with) indirect assessments to empirically verify the presumed triggers and payoffs of the behavior.
A team member (e.g. school psychologist, behavior analyst, or trained teacher) will observe the student across several situations, taking ABC data or scatterplot data.
It is crucial to first have clear operational definitions of the target behavior (what exactly counts as an occurrence) so that data are reliable (Direct Measurements - Screening and Assessment).
Multiple observations over different days increase confidence in the patterns noted.
Scoring and Interpretation: Unlike rating scales, descriptive methods don’t always yield a single numeric “score” for a function.
Instead, teams look at data summaries (like the number of times attention followed the behavior vs. escape consequences) to decide the likely function.
Sometimes they calculate conditional probability: e.g., “When the student is given a task, the probability of problem behavior is 0.8 (very high), and when no task is given it’s near 0.0; moreover, when problem behavior occurs, the probability that it is followed by task removal is 0.9” – such metrics strongly indicate an escape function ([PDF] Linking descriptive assessment to functional analysis and treatment ...).
Research shows descriptive analyses can be informative, but they do have limitations.
They may identify multiple correlations that are hard to interpret, and correlation isn’t causation – for instance, a student might tantrum during difficult tasks and during peer interactions, but only one truly reinforces it.
Studies comparing descriptive results to functional analyses find that they sometimes converge, but not always (A Comparison of Descriptive and Functional Analyses of ...) ([PDF] Linking descriptive assessment to functional analysis and treatment ...).
Thus, many experts treat descriptive assessment as a hypothesis-generation step rather than definitive proof.
Advantages: Descriptive assessments respect the natural context – no contrived experiments – and thus have high social validity in schools.
They can capture setting events and idiosyncratic triggers that structured tests might miss.
Teams can do them while continuing instruction (minimizing disruption).
Challenges: Without systematic manipulation, it’s possible to misidentify the function (e.g., if two triggers co-occur naturally, one might assume the wrong one is causal).
Also, collecting and analyzing ABC data can be time-consuming, and accuracy depends on observer training.
Despite these caveats, direct observation is a critical component of FBA in schools, often required to confirm that the function hypothesized via interviews is supported by real-world evidence (Indirect Assessments - Screening and Assessment).
Experimental Functional Analysis Methods
Experimental Functional Analysis (FA) is the most rigorous approach, in which conditions are systematically manipulated to verify what truly causes the behavior.
Based on the seminal work of Iwata et al. (1982/1994), an FA involves arranging analogue test conditions (e.g. attention, escape, alone, play) and measuring the behavior’s rate or intensity in each ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
By intentionally triggering and reinforcing the behavior in controlled ways, one can prove which consequence is functionally maintaining the behavior (e.g. if the behavior is much higher in the “escape” condition where demands are placed and removed contingent on behavior, and low elsewhere, escape is the confirmed function).
Traditional Multi-Condition FA: In the classic FA procedure ( Functional Analyses and Treatment of Precursor Behavior - PMC ), several conditions are run, each tailored to a specific hypothesized function. In an attention condition, the student is ignored until the problem behavior occurs, then receives a brief consoling interaction (testing if attention reinforces it). In an escape condition, the student is given difficult tasks and, if the problem behavior occurs, the task is removed (testing if escape reinforces it). In a tangible condition, a preferred item is removed and only returned when the behavior happens (testing tangible gain). And in a play (control) condition, the student has access to attention, no demands, and toys (no motivation to engage in problem behavior).
Each condition might last 5–15 minutes, repeated in a rotated sequence.
Scoring is based on the rate/frequency or duration of the target behavior in each condition, often plotted on a graph.
When one condition yields significantly higher behavior than the control, that indicates the function (e.g. behavior only spiked in the attention condition means it’s attention-maintained) ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
This approach has been highly successful and is considered the definitive test of function, with decades of replications in research ( CLASSROOM APPLICATION OF A TRIAL-BASED FUNCTIONAL ANALYSIS - PMC ).
Once a function is identified, an effective intervention can be developed (e.g. teaching a replacement behavior to get the same outcome, and discontinuing reinforcement for the problem behavior) ( CLASSROOM APPLICATION OF A TRIAL-BASED FUNCTIONAL ANALYSIS - PMC ).
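As a rough illustration of the scoring step described above, the sketch below compares mean response rates in each test condition against the play/control condition; the session data and the numeric threshold are hypothetical, since in practice interpretation relies on visual analysis of the graphed sessions.

```python
# Hypothetical FA data: responses per minute for each session, by condition.
sessions = {
    "attention": [0.2, 0.4, 0.3],
    "escape":    [2.1, 1.8, 2.4],
    "tangible":  [0.1, 0.0, 0.2],
    "play":      [0.0, 0.1, 0.0],   # control condition
}

def mean(values):
    return sum(values) / len(values)

control_rate = mean(sessions["play"])

# Flag test conditions whose mean rate is clearly elevated over control.
# The +1.0 rpm margin is arbitrary and for illustration only.
for condition, rates in sessions.items():
    if condition == "play":
        continue
    m = mean(rates)
    elevated = m > control_rate + 1.0
    print(f"{condition}: mean = {m:.2f} rpm, elevated over control: {elevated}")
```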
Brief Functional Analysis: One challenge in schools is that full FAs can be time- and resource-intensive – they require multiple sessions and careful control, and intentionally provoking problem behavior raises ethical concerns in a school setting ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
To address this, researchers developed brief FAs, which use shorter sessions and fewer repetitions.
For instance, Northup et al. (1991) demonstrated an FA with single 5-minute trials of each condition ( CLASSROOM APPLICATION OF A TRIAL-BASED FUNCTIONAL ANALYSIS - PMC ).
Brief FAs can sometimes be done in a single school day.
They do sometimes identify a clear function, but there is a trade-off: because the data are limited, only about half of brief FAs in published studies yield clear, interpretable results ( CLASSROOM APPLICATION OF A TRIAL-BASED FUNCTIONAL ANALYSIS - PMC ).
If a brief FA is inconclusive, a more extended analysis might still be needed ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
Nonetheless, in practice, a brief FA is a popular school approach when time is short – for example, a school psychologist might run one session each of attention, escape, and control during a student’s recess or after school to see if any condition immediately triggers the behavior.
Trial-Based Functional Analysis (TBFA): The Trial-Based FA is a more recent innovation designed especially for school and other natural settings.
Instead of pulling a student into a clinic or disrupting class for long periods, the TBFA embeds short test trials within ongoing activities ( CLASSROOM APPLICATION OF A TRIAL-BASED FUNCTIONAL ANALYSIS - PMC ).
Each “trial” consists of a brief segment where an establishing situation is introduced (e.g., a demand to test for escape, or diverted attention to test for attention-seeking) and a clear contingency for problem behavior is in place, followed by a control segment.
For example, during class, a teacher might for 1 minute give focused attention to other students and ignore the target student (potential attention deprivation) – if the student’s problem behavior occurs, the trial ends with the teacher providing attention (simulating the reinforcement); if not, it transitions to a control period where the student receives attention regardless.
These trials are interspersed throughout the day.
The outcome is determined by analyzing the percentage of trials in which behavior occurs in each test condition versus control.
Research by Bloom et al. (2011) found TBFA results agreed with traditional FA outcomes in a majority of cases (6 out of 10 with full correspondence, and partial in a 7th) ( CLASSROOM APPLICATION OF A TRIAL-BASED FUNCTIONAL ANALYSIS - PMC ).
TBFA is thus promising for classrooms, as it can identify function while minimizing disruption.
Its scoring is simply whether the behavior was triggered under specific test conditions and not during controls, across many trials.
If a student consistently behaves during “demand trials” but not in others, one can be confident the function is escape.
TBFAs and similar classroom FAs are increasingly reported in school-based research as viable when standard analyses are impractical ( CLASSROOM APPLICATION OF A TRIAL-BASED FUNCTIONAL ANALYSIS - PMC ).
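A minimal sketch of how TBFA outcomes might be tallied, using hypothetical trial records: each record notes whether the behavior occurred during the test segment and during the paired control segment.

```python
# Hypothetical trial-based FA records: for each embedded trial, whether the
# behavior occurred in the test segment and in the paired control segment.
trials = [
    {"condition": "demand",    "test": True,  "control": False},
    {"condition": "demand",    "test": True,  "control": False},
    {"condition": "demand",    "test": False, "control": False},
    {"condition": "attention", "test": False, "control": False},
    {"condition": "attention", "test": True,  "control": True},
    {"condition": "tangible",  "test": False, "control": False},
]

# Summarize the percentage of trials with behavior, by condition and segment.
for c in sorted({t["condition"] for t in trials}):
    subset = [t for t in trials if t["condition"] == c]
    pct_test = 100 * sum(t["test"] for t in subset) / len(subset)
    pct_control = 100 * sum(t["control"] for t in subset) / len(subset)
    print(f"{c}: behavior in {pct_test:.0f}% of test segments vs "
          f"{pct_control:.0f}% of control segments ({len(subset)} trials)")
```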
Latency-Based FA: Another variant sometimes used for severe behaviors in schools (or clinics) is measuring latency to behavior rather than rate.
In a latency FA, each condition is run but terminated as soon as the problem behavior occurs, and one records the latency (time) until it happened.
If latency is much shorter in one condition (e.g. the student engages in aggression within 10 seconds of a demand vs. 5 minutes in other conditions), that reveals the likely function.
This approach can reduce the total occurrences of dangerous behavior (each session yields at most one occurrence), aligning with school safety priorities.
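A small sketch of how latency data might be summarized appears below; the values are hypothetical, and sessions in which the behavior never occurred are scored at the session cap for comparison purposes.

```python
# Hypothetical latency-based FA data: seconds from condition onset to the
# first occurrence of the target behavior; None means it never occurred
# before the session cap.
SESSION_CAP = 300  # 5-minute cap, in seconds
latencies = {
    "demand":    [10, 8, 15],
    "attention": [280, None, 240],
    "play":      [None, None, None],  # control
}

for condition, values in latencies.items():
    # Score sessions without the behavior at the cap value.
    scored = sorted(v if v is not None else SESSION_CAP for v in values)
    median = scored[len(scored) // 2]
    print(f"{condition}: median latency = {median} s")
```

A much shorter latency in one condition (here, the demand condition) would point toward that function while keeping each session to a single occurrence of the behavior.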
All the above FA methods involve intentional provocation of the behavior in controlled ways, which can raise concerns in school contexts.
Indeed, ethics and practicality are crucial: We must weigh the insight gained against the risk of reinforcing a problem behavior even briefly ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
Some schools are hesitant or refuse to allow functional analyses for high-risk behaviors due to safety and liability (e.g. intentionally triggering aggression) ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
To navigate this, behavior analysts have developed safer FA strategies, such as:
Protective equipment or other safeguards during FAs (used in research for self-injury, etc.), though this can sometimes alter behavior or be infeasible in a classroom ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
Conducting FAs in clinical settings outside school, then bringing the results back to inform the BIP (this depends on having access to such resources).
Precursor functional analysis, described next, which is especially relevant to schools dealing with dangerous behaviors.
Specialized and Contemporary Approaches (ACT, PFA/SBT, Precursor Analysis)
This section highlights some advanced FBA approaches and tools that have emerged from recent research, including those integrating Acceptance and Commitment Therapy (ACT) principles, the Practical Functional Assessment (PFA) approach with its companion Skill-Based Treatment (SBT), and precursor behavior analyses.
These approaches extend or refine traditional FBA methods and are increasingly being considered in school behavior assessment practices.
Acceptance and Commitment Therapy (ACT) – Informed Assessments
Acceptance and Commitment Therapy is a behavioral intervention framework that focuses on psychological flexibility, often used to help individuals (including educators or students) alter how they relate to challenging thoughts and feelings.
While ACT is primarily a therapy approach, its concepts are being integrated into behavior support in schools.
In the context of FBAs, an ACT-informed lens means considering how private events (thoughts, emotions) and verbal rules might influence the occurrence of problem behavior or the implementation of interventions.
Functional Assessment of Private Events: Traditional FBAs largely focus on overt environmental events (e.g. teacher attention as a reinforcer).
ACT emphasizes that internal experiences (e.g. a student’s anxiety or a teacher’s frustration) can also function as antecedents or consequences.
Some behavior analysts conduct conceptual FBAs for these internal processes.
For example, a student’s disruptive behavior might be driven by an internal rule like “If I attempt this difficult task, I’ll feel like a failure” – an ACT-informed assessment would try to identify such covert rules.
The ACT Matrix is a tool sometimes used in counseling that maps behaviors according to whether they move one toward or away from values; school counselors might use a simplified matrix to discuss a student’s actions in terms of internal triggers (fear, doubt) and valued goals.
While not a standard FBA “rubric,” it is a structured way to assess the function of avoidance behaviors (e.g., avoiding academic tasks to escape feelings of inadequacy).
Assessing Teacher Behavior and Implementation Barriers: ACT techniques are also applied to the behavior of teachers and staff who must carry out behavior plans.
A growing body of work (e.g. by Tarbox, Szabo, & Aclan, 2020) suggests using functional assessment to understand why a teacher might unintentionally reinforce problem behavior or avoid implementing an intervention – often, some form of avoidance of psychological discomfort is involved ( Acceptance and Commitment Training Within the Scope of Practice of Applied Behavior Analysis - PMC ).
For instance, through an ACT-informed interview, a behavior coach might discover that a teacher is hesitant to ignore a student’s tantrums (as the BIP requires) because the teacher is thinking “I can’t stand to see the child upset; the class will think I’m mean.” That covert verbalization (“I just can’t take him being upset”) is essentially the teacher’s behavior (giving in to the tantrum) being reinforced by the removal of her own discomfort ( Acceptance and Commitment Training Within the Scope of Practice of Applied Behavior Analysis - PMC ).
To capture this, practitioners may use informal questionnaires or guided interviews with staff that align with the six ACT processes (e.g. asking the teacher what thoughts arise when the student tantrums, to identify potential cognitive fusion or experiential avoidance).
The “functional assessment” here maps the teacher’s patterns onto ACT constructs, identifying which ACT domains (acceptance, defusion, values, etc.) are relevant ( Acceptance and Commitment Training Within the Scope of Practice of Applied Behavior Analysis - PMC ).
Scoring/Tracking ACT Processes: There are some standardized ACT measures (like the Avoidance and Fusion Questionnaire for Youth, or the Parental Acceptance and Action Questionnaire adapted for teachers) that could be used to get numerical scores on constructs like psychological inflexibility.
However, in an FBA context these are supplementary.
The main “data” in ACT-informed assessment are qualitative insights: e.g., the presence of a strong avoidance motive suggests interventions should include ACT strategies (like training the teacher in mindfulness to tolerate the student’s temporary distress).
Research and Effectiveness: The integration of ACT in school behavior support is relatively new, but promising.
It recognizes that sometimes a behavior’s “function” might be to avoid internal aversives (anxiety, guilt), not just external ones.
According to ACT proponents, understanding these private-event functions can lead to more durable solutions, because you can concurrently address the internal context (for instance, teaching a student to accept feelings of frustration rather than escape tasks whenever frustration hits).
While not a standalone FBA tool, ACT-based analysis complements traditional FBA by broadening the scope of assessment to include covert factors.
Early case studies indicate that when teachers receive ACT-based coaching (increasing their psychological flexibility), their fidelity in implementing FBAs/BIPs improves and student outcomes follow ( Acceptance and Commitment Training Within the Scope of Practice of Applied Behavior Analysis - PMC ).
This is a developing area bridging school psychology and behavior analysis.
Practical Functional Assessment (PFA) and Skill-Based Treatment (SBT)
The Practical Functional Assessment (PFA) is a modern, streamlined approach to functional analysis introduced by Gregory Hanley and colleagues ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ).
It is sometimes called the Interview-Informed Synthesized Contingency Analysis (IISCA).
The goal of PFA is to yield a clear understanding of a child’s behavior function rapidly and safely, even for severe behaviors, and immediately use that information to design an effective intervention (the Skill-Based Treatment, SBT).
Key components of PFA:
Open-Ended Interview: The process starts with an in-depth interview with caregivers/teachers (and sometimes the student) to gather rich information about when the behavior occurs, what might set it off, and what the child typically achieves by it.
Unlike a fixed questionnaire, this interview is conversational and aimed at identifying all relevant motivating events.
Often, multiple triggers or reinforcers surface (e.g. a child’s tantrum may occur when an academic demand is placed and attention is diverted, and the child’s goal is not only to escape the work but also to regain one-on-one attention).
This leads to a synthesized hypothesis (e.g. “Johnny’s aggression is reinforced by escaping difficult tasks combined with getting adult comfort”).
Synthesized Contingency Analysis (IISCA): Based on the interview, the assessor designs a test condition that simultaneously incorporates the suspected combination of establishing events and reinforcers, and a control condition where those are absent or freely available.
For instance, in the test, the assessor might present a moderately challenging task while also attending to other students (establishing motivation for escape and attention), and if the problem behavior occurs, immediately both remove the task and comfort the child (delivering the synthesized reinforcement).
In the control, the child gets to play, with easy tasks and continuous attention, and problem behavior produces no additional outcome.
This paired test/control format often yields very clear results quickly: if the child’s problem behavior is truly driven by that mix of factors, it will appear in the test condition and be absent in control.
Studies have found that behavior differentiates strongly in IISCAs, often within just a few sessions ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ).
Scoring is typically visual/qualitative – the assessor notes if problem behavior occurs (and how quickly) in the test vs control.
Sometimes latency to first occurrence is used as the metric to keep sessions brief and safe.
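As an illustration only, a simple differentiation check for an IISCA might look like the following sketch; the alternating session records are hypothetical.

```python
# Hypothetical IISCA session records, alternating test and control sessions.
sessions = [
    {"type": "control", "occurred": False},
    {"type": "test",    "occurred": True},
    {"type": "control", "occurred": False},
    {"type": "test",    "occurred": True},
    {"type": "control", "occurred": False},
    {"type": "test",    "occurred": True},
]

tests = [s for s in sessions if s["type"] == "test"]
controls = [s for s in sessions if s["type"] == "control"]
test_rate = sum(s["occurred"] for s in tests) / len(tests)
control_rate = sum(s["occurred"] for s in controls) / len(controls)

# Behavior present in test sessions and absent in control sessions supports
# the synthesized hypothesis generated from the open-ended interview.
print(f"Test sessions with behavior: {test_rate:.0%}")
print(f"Control sessions with behavior: {control_rate:.0%}")
print("Differentiated:", test_rate > 0 and control_rate == 0)
```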
Skill-Based Treatment (SBT): A crucial aspect of PFA is immediately linking the assessment to treatment.
Based on the function identified, an individualized treatment is crafted that teaches the child alternative skills to get the same needs met safely.
Hanley’s SBT protocol usually includes teaching a simple Functional Communication Response (FCR) (e.g. asking for a break or for attention appropriately), a tolerance response (coping with being told “wait” or “no” briefly), and then reinforcing contextually appropriate behaviors (CAB) to rebuild the child’s participation in routine activities.
The treatment is done in a controlled setting at first and then generalized.
Notably, PFA/SBT emphasizes avoiding escalation – the idea is to teach the child that their appropriate behaviors will be honored before they feel the need to resort to the problem behavior.
The success of SBT is measured by the elimination (or near elimination) of the problem behavior and the child’s successful use of new skills.
Why PFA is notable for schools: It addresses common issues with traditional FA – time constraints and safety.
PFA sessions can be very short because the analysis is so targeted (often just one test condition and one control alternated a few times).
It also reduces dangerous behavior by possibly using precursors or shorter latencies (see below) and by quickly moving into treatment.
The approach has been used with children with autism and other disabilities, often in clinic or specialized school settings, with many replications showing meaningful improvements in severe behaviors ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ).
For example, Hanley et al. (2014) reported completely eliminating severe problem behavior in multiple children via PFA-informed treatment ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ).
Subsequent studies across settings (homes, schools) have found similarly strong outcomes ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ).
This evidence base has made PFA/SBT a highly regarded evidence-based practice in behavior analysis.
Implementation in Schools: Some public school districts have started training their behavior specialists in the PFA/SBT model, especially for students whose behaviors have not been successfully understood or treated with typical FBAs and BIPs.
The open-ended interview can be done with teachers and parents to capture school-specific contexts.
The analysis might be done in a controlled environment (like an empty classroom or therapy room) for safety, possibly using trained staff.
The analysis does involve briefly giving the problem behavior what it wants (contrary to typical school practice of not reinforcing misbehavior), but because it’s so short and immediately followed by teaching, administrators often allow it when they see the potential benefit.
The scoring of the assessment is simply whether problem behavior occurs in the test condition.
PFA does not produce a numeric “score” per se – it’s a functional demonstration.
Some practitioners will record the rate or percentage of 10-second intervals with problem behavior in test vs control to have objective data.
A clear spike in the test condition indicates the hypothesis was correct.
After that, success is measured by the treatment fidelity and outcomes (often graphed as reduction in behavior over time).
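A brief sketch of that interval-based scoring is shown below, using hypothetical 10-second partial-interval data for one test and one control session.

```python
# Hypothetical partial-interval data: one boolean per consecutive 10-second
# interval, True if problem behavior occurred at any point in that interval.
test_intervals = [True, True, False, True, True, False, True, True, True, False]
control_intervals = [False] * 10

def pct_intervals(intervals):
    # Percentage of intervals in which the behavior was scored.
    return 100 * sum(intervals) / len(intervals)

print(f"Test: {pct_intervals(test_intervals):.0f}% of 10-s intervals with behavior")
print(f"Control: {pct_intervals(control_intervals):.0f}% of 10-s intervals with behavior")
```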
Effectiveness and Research Validation: PFA and SBT have a growing body of research.
Hanley’s 2014 study ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ) was a seminal paper, and since then numerous studies (e.g.
Jessel et al. 2018; Beaulieu et al. 2018; Rajaraman et al. 2021) have replicated the process with different therapists and participants ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ).
They report high rates of socially significant success – children not only reduce problem behavior but gain communication, compliance, and tolerance skills.
One study referred to the outcomes as “socially meaningful resolution of many different types of dangerous problem behavior” being probable when using PFA/SBT ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ).
This is strong language, indicating how effective the approach has proven when done correctly.
That said, researchers also note it may not be feasible in all settings due to staff training needs or certain risks ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ).
Recent work has focused on enhancing practicality and safety, such as the Enhanced Choice Model where the child is given the choice to participate in treatment or take breaks, thus eliminating the need for any physical management ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ).
Overall, PFA/SBT represents a cutting-edge, evidence-backed system that integrates assessment and intervention – aligning well with school needs to both determine function and immediately improve behavior.
Precursor Behavior Analysis
When dealing with dangerous behaviors (like serious aggression or self-injury), a major concern in schools is how to assess the function without triggering harm.
A solution found in research is to identify a precursor behavior – a smaller, safer behavior that reliably occurs right before the dangerous behavior – and then treat that precursor as the target for analysis.
The logic is that if the precursor and the severe behavior are part of the same response class (meaning they serve the same function and the precursor is essentially an early signal), then analyzing the precursor yields the same functional information as analyzing the severe behavior ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
Identifying Precursors: School staff or parents are first asked to describe what behaviors tend to occur just before the big meltdown or injury.
For example, a student may start by clenching fists, whining, or making a specific facial expression before escalating to hitting.
Observers confirm that these precursor behaviors do in fact consistently precede the dangerous behavior in time ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
Once a reliable precursor (or set of precursors) is identified, the team can proceed with a functional analysis or intervention focusing on the precursor, thereby inferring the function of the more severe behavior indirectly ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
Precursor Functional Analysis: This is conducted just like a normal FA or a PFA, but the endpoint criterion is the precursor.
For instance, instead of waiting for a student to throw a chair to end the demand condition, the assessor might end it (and provide the reinforcer) as soon as the student begins to yell or threaten – a safer act.
The data are then whether the precursor behavior occurs more in certain test conditions.
If, say, the student’s yelling precursor only happens during attention-withdrawal tests and not during escape tests, it strongly suggests the severe aggression is attention-maintained too.
Research by Smith and Churchill (2002) demonstrated that functional analyses of precursors identified the same maintaining contingencies as analyses of the severe behaviors, in their participants ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
In other words, the precursors were valid proxies for the real thing.
Effectiveness: A review of over a dozen studies confirms that analyzing precursors is a viable and much safer strategy – in case after case, the precursor and problem behavior were sensitive to the same reinforcers ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ).
For example, Fritz et al. (2013) and others found that treating the precursor (through functional communication training, etc.) led to the elimination of the severe behavior as well ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ) ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
From a practical standpoint, schools appreciate this approach because it reduces the need to ever let a dangerous behavior get going during assessment.
It’s essentially a de-risked experimental analysis.
Use in Schools: A precursor assessment might be done as part of the FBA when direct observation notes the build-up signs.
Some structured tools help here; for instance, a teacher could use a checklist to mark what precursor signs they see and how often those precede the crisis.
Once confirmed, the team writes the FBA summary like: “When Kevin starts tapping his face rapidly (precursor), it indicates he is becoming distressed by academic demands, and if the demand continues, he will likely engage in self-injury.
Removing the demand when tapping begins has been observed to prevent escalation.” The intervention can then be proactive (teach Kevin to request a break at the tapping stage, etc.).
Precursor analysis aligns with the idea of being proactive and preventive, a common theme in school-based behavior support.
Research Validation: This approach is well-documented in literature ( Minimizing Escalation by Treating Dangerous Problem Behavior Within an Enhanced Choice Model - PMC ).
One study even referred to it as treating severe behaviors through an “indirect method” of assessing precursors ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
By 2008, researchers like Ellsworth et al. had successfully applied precursor FAs and then verified that interventions based on precursor function stopped the precursors and prevented the severe behaviors ( Functional Analyses and Treatment of Precursor Behavior - PMC ).
Schools implementing PBIS (Positive Behavioral Interventions and Supports) frameworks appreciate such evidence, as it fits the ethic of “minimize risk, maximize information.” It is worth noting that identifying a good precursor is not always possible – not all students show clear warning behaviors.
But when they do, it can be a lifesaver (sometimes literally).
Quality Evaluation Rubrics for FBA/BIP Documents
In addition to the assessment tools for determining behavior function, U.S. schools often use rubrics to evaluate the quality of the FBA process and the resulting Behavior Intervention Plan.
These rubrics are not functional assessment tools themselves, but rather scoring systems to ensure the FBA was thorough and the plan is technically sound.
They are typically employed by districts or researchers for training and auditing purposes:
FBA/BIP Technical Adequacy Evaluation (TATE): The TATE is a research-developed rubric that checks whether an FBA and BIP contain all essential components (clear operational definitions, data from multiple sources, hypothesis statements, function-linked interventions, etc.).
Each component is rated (present/absent or on a scale), yielding an overall fidelity score.
In one study examining 135 school FBAs and 129 BIPs, the TATE showed good psychometric properties – moderate internal consistency, excellent inter-rater reliability, and content validity as judged by experts ( Are We on Course Yet? Functional Behavior Assessment and Behavior Intervention Plan Technical Adequacy in Schools - PMC ).
They found most school FBAs/BIPs only met about 40–50% of the key components, underscoring the need for such rubrics ( Are We on Course Yet? Functional Behavior Assessment and Behavior Intervention Plan Technical Adequacy in Schools - PMC ).
Using a rubric like TATE can guide staff to improve the completeness and effectiveness of their FBA reports.
State/District Rubrics: Many states produce their own FBA/BIP guidelines.
For example, Tennessee’s Department of Education has an “FBA Self-Assessment Rubric” (often used by IEP teams) which rates compliance and quality on items like team composition, clarity of behavior definition, use of data, hypothesis accuracy, and linkage to the BIP.
It provides scores of 0, 2, or 4 on each criterion (0 = not met, 2 = meets basic compliance, 4 = high quality).
Such rubrics help ensure legal compliance (e.g., parent consent obtained, appropriate team members involved) and best practice (e.g., multiple sources of data, at least one direct observation included).
While these don’t directly identify a behavior’s function, they improve the consistency and credibility of the FBA process in schools.
Behavior Support Plan Quality Evaluations: After the FBA, the BIP’s quality is critical.
California’s PENT (Positive Environments, Network of Trainers) developed the “Essential 10” BIP rubric, which scores key elements of a behavior plan (like quality of behavior definition, functional hypothesis, prevention strategies, teaching of replacement behavior, etc.) (Essential 10 Scoring Rubric - Behavior Intervention (PENT)).
It explicitly notes that it “does not measure the accuracy of the function” (Essential 10 Scoring Rubric - Behavior Intervention (PENT)) – that must come from a good FBA – but it ensures the plan logically addresses the function identified.
Each component is rated 0, 1, or 2, with contaminating factors (like blaming the student’s character) resulting in a zero (Essential 10 Scoring Rubric - Behavior Intervention (PENT)).
Use of such rubrics in districts has been linked to stronger BIP implementation and better student outcomes, as weak plans can be revised before implementation.
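As an illustration of this kind of component scoring, a rubric total might be computed as in the sketch below; the component names, ratings, and zero-out rule shown are placeholders modeled loosely on the description above, not PENT's actual item wording.

```python
# Hypothetical rubric ratings for a draft BIP: each component scored 0, 1, or 2.
ratings = {
    "behavior definition": 2,
    "functional hypothesis": 1,
    "prevention strategies": 2,
    "replacement behavior teaching": 1,
    "reinforcement plan": 2,
}
# Components flagged for a "contaminating factor" (e.g., attributing the
# behavior to the student's character) are scored zero regardless of rating.
contaminated = {"functional hypothesis"}

scores = {c: (0 if c in contaminated else r) for c, r in ratings.items()}
total = sum(scores.values())
max_total = 2 * len(ratings)
print(f"BIP quality score: {total}/{max_total} ({100 * total / max_total:.0f}%)")
for component, score in scores.items():
    print(f"  {component}: {score}")
```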
In summary, these rubrics and scoring guides serve as quality control tools.
They reflect evidence-based indicators of a sound FBA/BIP.
For practitioners, they are checklists to self-evaluate their work; for researchers, they provide a way to quantify and compare FBA/BIP quality across schools.
A comprehensive FBA in a school setting ideally would not only utilize the best assessment tools (indirect, direct, experimental as needed) but also be written up and acted upon with fidelity – the rubrics help make sure “no step is missed” and the plan is truly function-based ( Are We on Course Yet? Functional Behavior Assessment and Behavior Intervention Plan Technical Adequacy in Schools - PMC ).
Conclusion
FBAs in U.S. public schools employ a variety of rubrics and tools – from quick checklists and rating scales to detailed observations and analyses – each contributing unique information about why a student behaves in a certain way.
Indirect assessments (like the QABF, MAS, FAST, PBQ, etc.) offer structured ways to gather initial hypotheses with scoring systems that highlight likely functions.
Direct descriptive methods (ABC charts, scatterplots, etc.) provide objective, situational data that can confirm and clarify those hypotheses.
When certainty is needed, experimental analyses (whether a full functional analysis, a brief or trial-based version, or a PFA) can definitively reveal function by manipulating variables in a controlled manner.
Emerging practices like ACT-informed assessment broaden the perspective to internal events, ensuring interventions address not just external contingencies but also private events influencing behavior.
The PFA/SBT model shows how assessment and intervention can be tightly integrated to produce life-changing outcomes efficiently, and precursor analyses demonstrate ingenuity in balancing rigor with safety.
Underpinning all these methods is a strong research base: functional assessment leads to more effective interventions ( Are We on Course Yet? Functional Behavior Assessment and Behavior Intervention Plan Technical Adequacy in Schools - PMC ), and tools that have been peer-reviewed (with data on reliability and validity) should be favored.
It’s also evident that no single tool is perfect – hence best practice often involves a multi-method FBA, using an indirect tool plus direct observation, and, if needed, an experimental test to verify.
Schools are increasingly equipped with not just the tools to do FBAs, but also rubrics to evaluate the quality of those FBAs and ensuing BIPs, ensuring that the assessments are thorough and the intervention plans truly align with assessment findings ( Are We on Course Yet? Functional Behavior Assessment and Behavior Intervention Plan Technical Adequacy in Schools - PMC ).
In practical terms, a school psychologist or behavior specialist conducting an FBA will often proceed as follows: gather initial information via teacher interviews and rating scales (indirect), observe the student in problem and non-problem contexts (direct), perhaps run a brief experimental analysis or a PFA if the function is still unclear, and then formulate a summary statement (e.g., “Function = escape from difficult tasks via adult attention”) supported by the collected data.
This summary guides the BIP, which is then evaluated against quality criteria or rubrics before finalizing.
By adhering to evidence-based tools and scoring systems at each step, schools maximize the chances that their intervention will be effective – leading to better outcomes for students (reduced problem behaviors, improved skills) and a safer, more positive learning environment for all.
References:
Bloom, S. E., Iwata, B. A., Fritz, J. N., Roscoe, E. M., & Carreau, A. (2011). Classroom application of a trial-based functional analysis. Journal of Applied Behavior Analysis, 44(1), 19–31. doi:10.1901/jaba.2011.44-19
Cook, C. R., et al. (2012). Technical adequacy of school FBAs and BIPs: Development of the FBA/BIP Technical Adequacy Evaluation Tool (TATE). Journal of Applied Behavior Analysis, 45(3), 463–478.
Durand, V. M., & Crimmins, D. (1988). Motivation Assessment Scale (MAS) and its applications. Journal of Developmental and Physical Disabilities, 2(4), 170–175.
Ellingson, S. A., Miltenberger, R. G., et al. (2000). Treatment of automatically reinforced self-injurious behavior using a latency-based functional analysis. Behavior Modification, 24(1), 31–52.
Hanley, G. P., Jin, C. S., Vanselow, N. R., & Hanratty, L. A. (2014). Functional assessment and treatment of problem behavior via synthesized contingency analyses. Journal of Applied Behavior Analysis, 47(1), 16–36.
Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27(2), 197–209. (Original work published 1982)
Iwata, B. A., DeLeon, I. G., Roscoe, E. M., et al. (2013). Reliability and validity of the Functional Analysis Screening Tool (FAST). Journal of Applied Behavior Analysis, 46(2), 345–355.
Lewis, T. J., Scott, T. M., & Sugai, G. (1994). The Problem Behavior Questionnaire: A teacher-based instrument to develop functional hypotheses of problem behavior. Journal of Positive Behavior Interventions, 4(3), 166–176.
Matson, J. L., & Vollmer, T. R. (1995). Questions About Behavioral Function (QABF) scale development and validity. Research in Developmental Disabilities, 16(3), 57–67.
PENT. (2021). Essential 10 scoring rubric: Essential components of behavior intervention plans. Retrieved from https://pent.ca.gov
Smith, R. G., & Churchill, R. (2002). Identification of precursors to severe problem behavior and analysis of their functions. Journal of Developmental and Physical Disabilities, 14(4), 315–327.
Tarbox, J., Szabo, T. G., & Aclan, M. (2020). Acceptance and commitment training within the scope of practice of applied behavior analysis: Functional assessment of private events and implementation issues. Behavior Analysis in Practice, 13(1), 50–61.
Edited by Rob Spain, M.S., BCBA, IBA
Want More Evidence-Based ABA Strategies?
Subscribe to our weekly newsletter for practical tips, research updates, and free resources for school-based BCBAs.