The DEMOGRAPHIC FORM consists of nine short sections, each with checkboxes or blanks for sex, age, ethnicity, years of experience as a school employee, grade level(s) currently teaching (if applicable), primary school role, certification status, highest degree earned, classroom management course history, and professional development history with regard to academic and behavior screenings. The form is completed by all participating faculty and staff at each school site so that those designing, implementing, and evaluating the Ci3T model of prevention can be clearly described. This allows others interested in adopting a Ci3T model to compare their own faculty and staff with those of implementing schools.
Schoolwide Expectations Survey for Specific Settings (SESSS)
The SCHOOLWIDE EXPECTATIONS SURVEY FOR SPECIFIC SETTINGS (SESSS; Lane, Oakes, & Menzies, 2010) allows school-based faculty and staff to identify behaviors that are critical for student success at their particular school. The survey is completed by all faculty and staff working at and/or supporting the school site. It is designed to gather information on three broad categories (respect, responsibility, and best effort) across seven school settings: classroom, hallway, cafeteria, playground, restroom, bus, and arrival/dismissal. The expectations were selected based on the most prevalent expectations of schools implementing positive behavioral interventions and supports (Lynass, Tsai, Richmond, & Cheney, 2012). Each behavior is rated on importance using a 3-point Likert-type scale where 0 = not important for success in this setting, 1 = important for success in this setting, and 2 = critical for success in this setting. Scores for each item are aggregated and reported as the number and percentage of faculty and staff who rated the behavior as critical for success. The Ci3T school leadership team then uses the SESSS data to construct the first draft of the schoolwide expectation matrix so that the draft reflects the behaviors important to the majority of the school (see additional detail in Lane, Oakes, Jenkins, Menzies, & Kalberg, 2014).
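The aggregation step described above is simple counting. The sketch below is illustrative only: the data layout (a dict mapping each behavior to a list of 0/1/2 ratings) is an assumption, not part of the published instrument.

```python
# Sketch of the SESSS aggregation described above: for each behavior,
# count how many faculty/staff rated it 2 (critical for success) and
# report that count as a percentage of all respondents.
# The dict-of-ratings layout is an illustrative assumption.

def aggregate_sesss(ratings_by_item):
    """Return {behavior: (n_critical, pct_critical)}."""
    summary = {}
    for item, ratings in ratings_by_item.items():
        n_critical = sum(1 for r in ratings if r == 2)
        pct = 100 * n_critical / len(ratings) if ratings else 0.0
        summary[item] = (n_critical, round(pct, 1))
    return summary

# Example: 10 staff rate two candidate hallway behaviors.
ratings = {
    "use an inside voice": [2, 2, 1, 2, 2, 2, 1, 2, 2, 0],
    "walk on the right":   [1, 1, 2, 0, 1, 2, 1, 1, 0, 1],
}
print(aggregate_sesss(ratings))
# {'use an inside voice': (7, 70.0), 'walk on the right': (2, 20.0)}
```

Behaviors rated critical by a large share of respondents would be candidates for the first draft of the expectation matrix.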
SESSS (8.5″ x 14″ single page legal size PDF to print).
SESSS Large Format (8.5″ x 11″ multi-page letter size PDF to print).
SESSS Report Template (aggregate data reported back to school site; 8.5″ x 14″ multi-page legal size Excel file to print). SESSS Report Sample.
Knowledge, Confidence, and Use (KCU) Survey
The KNOWLEDGE, CONFIDENCE, AND USE (KCU) SURVEY is a multi-item survey on which educators rate their knowledge of, confidence in using, and the perceived usefulness of key constructs of the Ci3T model of prevention, followed by a brief assessment of their actual knowledge of these concepts. Self-reported KCU items are rated using a 4-point Likert-type scale ranging from 0 (I have no KNOWLEDGE of this concept or strategy; I am not CONFIDENT in my ability to use or implement this concept or strategy; This concept or strategy is neither USEFUL nor relevant to my teaching) to 3 (I have a substantial amount of KNOWLEDGE about this concept or strategy; I am very CONFIDENT in my ability to use or implement this concept or strategy; This concept or strategy is very USEFUL and/or relevant to my teaching), with higher numbers reflecting more knowledge, confidence, and perceived usefulness of the construct. A set of open-ended items is completed and scored as an assessment of actual knowledge. Educators complete the survey before a professional learning series (such as a full-year training series to design a Ci3T model of prevention), again after the series is completed, and then once the new practice has been implemented (about three months after the start of implementation) to look for changes in KCU of constructs between designing and implementing the new practice. Pre, post, and follow-up mean scores are computed for comparison. There are versions of the KCU for the Ci3T model of prevention; for low-intensity strategies, classroom management, and instructional delivery practices (included in Lane, Menzies, Bruhn, & Crnobori, 2011); and for one systematic model of functional assessment-based interventions (Umbreit, Ferro, Liaupsin, & Lane, 2007).
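The pre/post/follow-up comparison above reduces to computing mean self-ratings at each time point. A minimal sketch, with illustrative time-point labels:

```python
# Minimal sketch of the KCU comparison described above: mean self-ratings
# (0-3 scale) at pre, post, and follow-up, so a team can look for change
# across the professional learning series. Labels are illustrative.

def kcu_means(responses):
    """responses: {time_point: [item ratings 0-3]} -> {time_point: mean}."""
    return {tp: round(sum(vals) / len(vals), 2) for tp, vals in responses.items()}

print(kcu_means({
    "pre":       [1, 0, 2, 1, 1],
    "post":      [2, 2, 3, 2, 2],
    "follow-up": [3, 2, 3, 2, 3],
}))
# {'pre': 1.0, 'post': 2.2, 'follow-up': 2.6}
```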
Ci3T Version 1: Write-in questions
Ci3T KCU Write-in PDF to print.
Ci3T KCU Write-in Report Template MS-Word document to edit and print.
Ci3T Version 2: Multiple-choice questions
Ci3T KCU Multiple Choice PDF to print.
Ci3T KCU Multiple Choice Report Template MS-Word document to edit and print.
Ci3T KCU_Multiple_Choice_Qualtrics_Survey_Template.qsf
KCU_Multiple_Choice_Qualtrics_Report_Template.qrf
FABI KCU Version 1: Write-in questions
KCU FABI Write-in PDF to print.
FABI KCU Version 2: Multiple-choice questions
KCU FABI Multiple Choice PDF to print.
Primary Intervention Rating Scale (PIRS): Pre-Implementation (training year)
The PRIMARY INTERVENTION RATING SCALE (PIRS; Lane, Robertson, & Wehby, 2002; Lane, Kalberg, Bruhn, Driscoll, Wehby, & Elliott, 2009) is an adapted version of the Intervention Rating Profile-15 (Witt & Elliott, 1985), a widely used measure of social validity. Social validity refers to stakeholders’ views on the social significance of the intervention goals, the acceptability of the intervention procedures, and the social importance of the intervention outcomes (Wolf, 1978). During the training year, when a school’s Ci3T leadership team is designing its Ci3T prevention plan, school faculty and staff are encouraged to provide feedback on the first full draft of the primary plan using the PIRS. First, the team shares the draft primary prevention plan and answers any questions posed by faculty and staff. Then, those who had the opportunity to learn about the primary plan provide feedback. PIRS results are aggregated (i.e., average item and overall scores), and comments are typed up to protect confidentiality before being shared back with the team. The Ci3T team reviews the results, revises the plan as appropriate, and reports back to the faculty and staff on decisions made in response to the feedback. Some suggestions do not result in changes, for various reasons (e.g., feasibility, resources, or the science supporting certain aspects of the plan). In some instances, the narrative feedback provides direction on where additional professional learning is needed (e.g., understanding the difference between bribery and reinforcement; how to use tickets to reinforce behaviors that facilitate the instructional process).
To complete the PIRS, each individual from the school site who has reviewed the Ci3T primary prevention plan (or another primary prevention plan) rates his or her opinion on 17 items using a 6-point Likert-type scale ranging from 1 (strongly disagree) to 6 (strongly agree), and then completes four open-ended questions regarding suggestions for changes to the primary prevention plan, perceptions of student performance as a result of implementing the plan, and what they feel are the least and most beneficial components of the plan. The PIRS takes approximately 10 minutes to complete. Please see Lane, Oakes, and Magill (2014) in the 2014 special issue of Preventing School Failure (volume 58, issue 3), Designing, Implementing, and Evaluating Comprehensive, Integrated, Three-Tiered Models of Prevention: A Step-by-Step Guide, for additional details and a sample school-level report of results.
PIRS Pre-Implementation PDF to print.
PIRS Pre-Implementation Report Template MS-Word document to edit and print.
Ci3T MODEL OF PREVENTION: FEEDBACK FORM
Adapted from the Primary Prevention Plan: Feedback Form (Lane, 2002), the Ci3T Model of Prevention: Feedback Form allows teams to gather additional feedback from school stakeholders on the full draft of the Ci3T prevention plan. The feedback form is the second formal step in soliciting stakeholder feedback on (i.e., assessing the social validity of) the draft Ci3T plan (also referred to as the Ci3T blueprint). The form is used at the end of the training series (completed after Session 5, with results returned at Session 6) and gives stakeholders a voice in the final revisions made during Session 6 as the team prepares to implement in the following year. It is completed by stakeholders who have reviewed the revised plan to provide feedback to the school’s Ci3T leadership team. The team then uses stakeholders’ feedback for a final round of revisions to the plan, which is implemented in the following academic year. As with scoring the PIRS, the 5-point Likert-type items are averaged (means and standard deviations reported), and comments from the four open-ended items are typed and combined into one report. The team assesses the level of agreement with the plan and the themes of comments to determine additional professional learning needs and final refinements to the Ci3T plan before implementation. Please see Lane, Oakes, Jenkins, Menzies, & Kalberg (2014) for additional detail.
Ci3T Feedback Form PDF to print.
Ci3T Feedback Form Report Template MS-Word document to edit and print.
PROFESSIONAL LEARNING SERIES SESSION EVALUATIONS
Ci3T school leadership teams complete PROFESSIONAL LEARNING SERIES SESSION EVALUATIONS after they attend each of the six Ci3T sessions throughout the training year. The feedback is used to improve content delivery and determine if the sessions are meeting the needs and expectations of each school’s leadership team to be able to successfully design a Ci3T model of prevention. Professional learning series session attendees rate seven items (e.g., The format of the training kept me engaged, The objectives of the training session were addressed) on a 5-point Likert-type scale ranging from 1 = unsuccessful to 5 = successful. Space for additional comments, recommendations, and commendations is provided.
School-wide Evaluation Tool (SET)
The SCHOOL-WIDE EVALUATION TOOL (SET; Todd et al., 2012) version 2.0 is a treatment integrity tool used to evaluate implementation progress of the positive behavior interventions and supports (PBIS) component of a school’s Ci3T plan. It contains 28 items to assess the seven critical features of PBIS: (a) school-wide behavioral expectations are defined, (b) school-wide expectations are taught explicitly to all students, (c) reinforcement is provided for meeting school-wide behavioral expectations, (d) a continuum of consequences for responding to problem behavior is implemented consistently, (e) problem behavior patterns are monitored and resulting data are used to inform ongoing decision making, (f) an administrator is involved with and supports the school-wide PBIS framework, and (g) the school district offers assistance as evidenced by functional policies, staff training opportunities, and data collection options. Multiple sources of information are gathered including permanent products (e.g., school improvement plan goals, school action plan, office discipline referral form), observations, staff interviews, and student interviews. SET items are scored using specific criteria for each item, where 0, 1, or 2 points are earned based on how fully the evaluation question was satisfied. For example, “Are the agreed upon rules & expectations publicly posted in 8 of 10 locations?” is worth 0 points if expectation matrices were observed in 0-4 locations, 1 point for 5-7, and 2 points for 8-10 locations. Percentage scores for each subscale and an overall score are calculated. In our work, the SET is typically completed twice during the academic year, once in fall and once in spring. Results are used by schools to determine annual goals, design and revise procedures as needed, compare efforts from year to year, and more.
The SET and the SET Manual are available at http://www.pbis.org/evaluation/evaluation-tools.
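The example item above makes the SET's criterion-based scoring concrete: a count of observed locations maps to 0, 1, or 2 points, and each subscale is reported as a percentage of points possible. A sketch, with the location thresholds taken from the example in the text:

```python
# Illustrative SET scoring: expectations posted in 8-10 of 10 locations
# earns 2 points, 5-7 earns 1, 0-4 earns 0 (per the example item above);
# subscale scores are a percentage of points possible (2 per item).

def score_posted_expectations(locations_with_matrix):
    """Score the 'publicly posted' item from the count of locations (0-10)."""
    if locations_with_matrix >= 8:
        return 2
    if locations_with_matrix >= 5:
        return 1
    return 0

def subscale_percent(item_scores):
    """Each SET item is worth up to 2 points."""
    return 100 * sum(item_scores) / (2 * len(item_scores))

print(score_posted_expectations(9))   # 2
print(subscale_percent([2, 2, 1, 2])) # 87.5
```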
Treatment Integrity
The Ci3T Treatment Integrity: Teacher Self-Report (Ci3T TI: TSR; Lane, 2009a) and Ci3T Treatment Integrity: Direct Observation Tool (Ci3T TI: DO; Lane, 2009b) are used to measure the level of implementation of the school’s Ci3T plan core components.
Ci3T Treatment Integrity: Teacher Self-Report (Ci3T TI: TSR)
The Ci3T TI: TSR is a 38-item checklist assessing educator use of Ci3T practices. There are three subscales: Procedures for Teaching (16 items; e.g., Did I use clear routines for classroom procedures?), Procedures for Reinforcing (10 items; e.g., Did I use behavior-specific praise when giving tickets to students?), and Procedures for Monitoring (12 items; e.g., Did I use behavior and academic data together [in conjunction with each other] to inform my instruction?; Lane, 2009a). Teachers rate their use of each item using a 5-point Likert-type scale (not at all = 0, some of the time = 1, most of the time = 2, all of the time = 3, no opportunity = 7). Scores are computed by dividing the total score by the total possible (adjusted for missing items) and multiplying by 100; school-level percentages are computed as the mean score across everyone who completed the Ci3T TI: TSR. This allows the school Ci3T leadership team to plan for summer revisions, professional learning opportunities, and coaching supports. Additionally, teams can monitor integrity of the core components over time and in conjunction with student data. For example, if there are increases in behavioral risk on screening measures or in office discipline referrals, teams would examine the integrity data to ensure students have access to the Ci3T primary plan before assuming non-responsiveness. That is, they consider whether the Ci3T plan is being implemented with sufficient integrity to produce the desired results for students. We recommend treatment integrity assessments be conducted at least twice per year, once during fall semester (at least 6-8 weeks after school begins) and once during spring semester. An initial evaluation of the Ci3T TI: TSR indicated adequate to desirable reliability (see Bruhn, 2011).
Ci3T Treatment Integrity: Direct Observation (Ci3T TI: DO)
The Ci3T TI: DO (Lane, 2009b) is a subset of items from the Ci3T TI: TSR (Lane, 2009a). It includes 13 procedures for teaching and eight procedures for reinforcing. Trained observers enter a classroom for 30 minutes and score each item using the same 5-point Likert-type scale, indicating the level of implementation observed during that session (not at all = 0, some of the time = 1, most of the time = 2, all of the time = 3, no opportunity = 7). After the observation and at a natural break in instruction, the teacher completes the same direct observation tool, scoring his or her perception of implementation during the same 30-minute period. Integrity is computed by dividing the total score by the total possible (adjusted for missing items) and multiplying by 100, with the goal of examining the extent to which the observer and teacher converge and diverge in their views of implementation of the Ci3T primary plan. Results can be used to celebrate strengths and to inform professional development offerings that support particular components of the plan (Lane, Oakes, & Magill, 2014).
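The integrity computation described for both the Ci3T TI: TSR and Ci3T TI: DO can be sketched as follows. How "no opportunity" (= 7) and missing items are excluded from the possible total is an assumption consistent with the "adjusted for missing items" wording above, not a published scoring rule.

```python
# Sketch of the treatment integrity percentage: total score divided by
# total possible (3 points per rated item), times 100. Items marked
# 'no opportunity' (7) or left blank are excluded from both totals --
# an assumed reading of "adjusted for missing items".

def integrity_percent(item_scores, max_per_item=3, no_opportunity=7):
    """Percentage of implementation across rated items (each scored 0-3)."""
    rated = [s for s in item_scores if s is not None and s != no_opportunity]
    if not rated:
        return 0.0
    return 100 * sum(rated) / (max_per_item * len(rated))

# Ten items: eight rated, one 'no opportunity', one missing.
scores = [3, 2, 3, 3, 1, 2, 3, 3, 7, None]
print(round(integrity_percent(scores), 1))  # 83.3
```

A school-level percentage would then be the mean of these values across all completed forms.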
Primary Intervention Rating Scale (PIRS)
The PRIMARY INTERVENTION RATING SCALE (PIRS; Lane, Robertson, & Wehby, 2002; Lane, Kalberg, Bruhn, Driscoll, Wehby, & Elliott, 2009) is an adapted version of the Intervention Rating Profile-15 (Witt & Elliott, 1985), a widely used measure of social validity. Social validity refers to stakeholders’ views on the social significance of the intervention goals, the acceptability of the intervention procedures, and the social importance of the intervention outcomes (Wolf, 1978). The PIRS is a brief, individually completed rating scale designed to assess the social validity of primary prevention plans prior to intervention onset (pre-intervention) and after the primary prevention plan has been put in place (implementation). To complete the PIRS, each individual from the school site who has reviewed the Ci3T primary prevention plan (or another primary prevention plan) rates his or her opinion on 17 items using a 6-point Likert-type scale ranging from 1 (strongly disagree) to 6 (strongly agree), and then completes four open-ended questions regarding suggestions for changes to the primary plan, perceptions of student performance as a result of implementing the primary plan, and what they feel are the least and most beneficial components of the plan. The PIRS takes approximately 10 minutes to complete. Please see Lane, Oakes, and Magill (2014) in the 2014 special issue of Preventing School Failure (volume 58, issue 3), Designing, Implementing, and Evaluating Comprehensive, Integrated, Three-Tiered Models of Prevention: A Step-by-Step Guide, for additional details and a sample school-level report of results.
Download PIRS for implementation years: Paper form.
Student Risk Screening Scale (SRSS)
The STUDENT RISK SCREENING SCALE (SRSS) is a free-access tool, originally developed to detect elementary students at risk of anti-social behavior patterns (Drummond, 1994). Students are individually rated on seven items using a 4-point Likert-type scale: never = 0, occasionally = 1, sometimes = 2, frequently = 3. These items are: (1) steal; (2) lie, cheat, sneak; (3) behavior problem; (4) peer rejection; (5) low academic achievement; (6) negative attitude; and (7) aggressive behavior. Once the items are summed for each student, the student’s total score falls into one of three categories of risk: low (0-3), moderate (4-8), or high (9-21). Supports (secondary, tertiary) are considered for students based on the category of their individual score. Students in the moderate or high risk categories should be considered by educators for additional supports and interventions at the school-site. Additionally, aggregated SRSS data can be utilized to monitor the level of risk over time within grade levels, school buildings, or a district (Lane, Kalberg, Bruhn, Mahoney, & Driscoll, 2008). Since its initial development, the SRSS has been validated for use at the elementary (e.g., Drummond, Eddy, & Reid, 1998), middle (Lane, Parks, Kalberg, & Carter, 2007) and high (Lane, Kalberg, Parks, & Carter, 2008; Lane, Oakes, Ennis, Cox, Schatschneider, & Lambert, 2013) school levels. Furthermore, this screening tool has been found to be socially valid (e.g., Lane, Bruhn, Eisner, & Kalberg, 2010) and psychometrically sound (Lane, Little, et al., 2009). Specifically, Lane, Little, et al., (2009) found the SRSS total scores improved chance estimates by 45% for detecting students with externalizing (e.g., aggressive, noncompliant) behavior patterns and 30% for detecting students with internalizing (e.g., anxious, socially withdrawn) behavior patterns. 
The SRSS has been used widely in studies to (a) determine the responsiveness of students within the context of multi-tiered models of prevention, (b) illustrate how a school’s risk shifts over time, and (c) identify students in need of secondary (Tier 2) or tertiary (Tier 3) supports.
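The SRSS scoring rule described above (sum seven 0-3 ratings, then map the total to a risk category) can be sketched directly:

```python
# Sketch of SRSS scoring as described above: sum the seven item ratings
# (each 0-3, for a 0-21 total) and map the total to a risk category
# using the published cut scores (low 0-3, moderate 4-8, high 9-21).

def srss_risk(item_ratings):
    """item_ratings: seven values, each 0-3. Returns (total, category)."""
    assert len(item_ratings) == 7
    total = sum(item_ratings)
    if total <= 3:
        category = "low"
    elif total <= 8:
        category = "moderate"
    else:
        category = "high"
    return total, category

print(srss_risk([0, 1, 0, 0, 1, 0, 0]))  # (2, 'low')
print(srss_risk([1, 2, 1, 0, 2, 1, 0]))  # (7, 'moderate')
print(srss_risk([2, 3, 2, 1, 3, 2, 3]))  # (16, 'high')
```

Students landing in the moderate or high category would then be considered for Tier 2 or Tier 3 supports alongside other data sources.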
For more information, examples, and a voiced-over PowerPoint presentation by Kathleen Lane, visit Michigan’s Integrated Behavior and Learning Support Initiative (MIBLSI) site.
The Student Risk Screening Scale – Internalizing and Externalizing (SRSS-IE) is an adapted version of the SRSS (Drummond, 1994) and is also available free-access. The SRSS-IE adds to the original seven SRSS items five new items characteristic of internalizing behaviors; all 12 items are rated on the same 4-point Likert-type scale: never = 0, occasionally = 1, sometimes = 2, frequently = 3. For more information on the new secondary-level (middle and high school) preliminary cut scores, please consider reading the SRSS-IE MS HS Brief.
Items include the original seven items, (1) steal; (2) lie, cheat, sneak; (3) behavior problem; (4) peer rejection; (5) low academic achievement; (6) negative attitude; and (7) aggressive behavior, and the five newly added internalizing items, (8) emotionally flat; (9) shy, withdrawn; (10) sad, depressed; (11) anxious; and (12) lonely.
We encourage you to read the following articles to learn more about how to use the SRSS-IE at elementary as well as middle and high schools.
Note. The following preliminary cut scores are now available for the SRSS-I (elementary: SRSS-I5, middle and high: SRSS-I6). Please note the item peer rejection is included in the SRSS-E7 and SRSS-I6 when used at the middle and high school level. The two subscale scores are used for decision making.
| | Elementary School | Middle and High School |
| --- | --- | --- |
| Externalizing items (SRSS-E7) | (1) steal; (2) lie, cheat, sneak; (3) behavior problem; (4) peer rejection; (5) low academic achievement; (6) negative attitude; (7) aggressive behavior | (1) steal; (2) lie, cheat, sneak; (3) behavior problem; (4) peer rejection; (5) low academic achievement; (6) negative attitude; (7) aggressive behavior |
| Externalizing cut scores (SRSS-E7) | 0-3 = low risk; 4-8 = moderate risk; 9-21 = high risk | 0-3 = low risk; 4-8 = moderate risk; 9-21 = high risk |
| Internalizing items | SRSS-I5: (8) emotionally flat; (9) shy, withdrawn; (10) sad, depressed; (11) anxious; (12) lonely | SRSS-I6: (4) peer rejection; (8) emotionally flat; (9) shy, withdrawn; (10) sad, depressed; (11) anxious; (12) lonely |
| Internalizing cut scores | SRSS-I5: 0-1 = low risk; 2-3 = moderate risk; 4-15 = high risk | SRSS-I6: 0-3 = low risk; 4-5 = moderate risk; 6-18 = high risk |
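The decision rules above can be sketched in code. This is an illustrative implementation of the preliminary cut scores only; item numbering (1-12) follows the item list earlier in this section, and at middle/high school item 4 (peer rejection) is counted on both the externalizing and internalizing subscales, per the note above.

```python
# Sketch of SRSS-IE decision rules using the preliminary cut scores
# above. Items are keyed 1-12; at middle/high school, item 4 (peer
# rejection) is counted on both the SRSS-E7 and SRSS-I6 subscales.

E7_ITEMS = range(1, 8)               # items 1-7
I5_ITEMS = range(8, 13)              # items 8-12 (elementary)
I6_ITEMS = [4] + list(range(8, 13))  # items 4, 8-12 (middle/high)

def categorize(total, cuts):
    """cuts = (max low-risk total, max moderate-risk total)."""
    low_max, mod_max = cuts
    if total <= low_max:
        return "low"
    return "moderate" if total <= mod_max else "high"

def srss_ie(ratings, level):
    """ratings: {item_number: 0-3}; level: 'elementary' or 'secondary'.
    Returns (externalizing category, internalizing category)."""
    e7 = sum(ratings[i] for i in E7_ITEMS)
    if level == "elementary":
        internal = sum(ratings[i] for i in I5_ITEMS)
        return categorize(e7, (3, 8)), categorize(internal, (1, 3))
    internal = sum(ratings[i] for i in I6_ITEMS)
    return categorize(e7, (3, 8)), categorize(internal, (3, 5))

ratings = {i: 0 for i in range(1, 13)}
ratings.update({4: 2, 9: 1, 10: 2, 11: 1})
print(srss_ie(ratings, "elementary"))  # ('low', 'high')
print(srss_ie(ratings, "secondary"))   # ('low', 'high')
```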
Elementary School Level:
Lane, K. L., Oakes, W. P., Swogger, E. D., Schatschneider, C., Menzies, H. M., & Sanchez, J. (2015). Student risk screening scale for internalizing and externalizing behaviors: Preliminary cut scores to support data-informed decision making. Behavioral Disorders, 40, 159-170.
Middle and High School Levels:
Lane, K. L., Oakes, W. P., Cantwell, E. D., Schatschneider, C., Menzies, H., Crittenden, M., & Messenger, M. (in press). Student Risk Screening Scale for Internalizing and Externalizing Behaviors: Preliminary cut scores to support data-informed decision making in middle and high schools. Behavioral Disorders.
School level teams use these scores along with other school-collected data (e.g., curriculum-based measures of reading, math, and writing; course failures; office discipline referrals; attendance patterns) to inform instruction and make decisions regarding student needs for more intensive supports (i.e., Tier 2 or Tier 3; see Lane, Oakes, Ennis & Hirsh, 2014).
Download the SRSS-IE in MS-Excel format. Note: This file contains 3 tabs, one for elementary schools, one for middle/high schools, and one containing important notes.
For more information about SRSS-IE screening protocols at the district level and site level, and summary sheets, visit our systematic screening page.
Behavior Assessment System for Children 3rd Edition: Behavioral and Emotional Screening System (BASC-3: BESS)
The BEHAVIOR ASSESSMENT SYSTEM FOR CHILDREN 3RD EDITION: BEHAVIORAL & EMOTIONAL SCREENING SYSTEM (BASC-3: BESS; Kamphaus & Reynolds, 2015) is a brief universal screening system for measuring behavioral and emotional strengths and weaknesses of children and adolescents in grades preK-12 in a reliable, quick, systematic way. Forms can be scored by hand or completed as part of the AIMSweb assessment and data management system. The BASC-3: BESS was designed to identify students with behavioral or emotional patterns that impact academic achievement or social relationships. Keeping in mind that students behave differently for different people and in different settings or contexts, the BASC-3: BESS relies on multiple informants (teacher, parent, student) to obtain a comprehensive understanding of student strengths and weaknesses. Behavioral areas assessed include (a) internalizing problems, (b) externalizing problems, (c) school problems, and (d) adaptive skills. The BASC-3: BESS includes three forms that can be used individually or in combination (Spanish-language versions available for the parent and student forms): (a) Teacher: Preschool and Child/Adolescent, (b) Student self-report: Child/Adolescent, and (c) Parent: Preschool and Child/Adolescent.
Social, Academic, & Emotional Behavior Risk Screener (SAEBRS)
The SOCIAL, ACADEMIC, & EMOTIONAL BEHAVIOR RISK SCREENER (SAEBRS; Kilgus, Chafouleas, Riley-Tillman, & von der Embse, 2013) is a brief (1-3 minutes per student) universal screening tool for behavioral and emotional risk designed for use across the K-12 grade span. The measure consists of 19 items divided among three scales: Social Behavior (6 items), Academic Behavior (6 items), and Emotional Behavior (7 items). Research suggests the SAEBRS may also be used to evaluate overall general behavior, as assessed by a Total Behavior score across all 19 items. Raters indicate how frequently the student has displayed each behavior during the previous month using a 4-point Likert-type scale: 0 = Never, 1 = Sometimes, 2 = Often, 3 = Almost Always. Prior to scoring, negatively worded items are reverse scored.
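The reverse-scoring step mentioned above can be sketched as follows. Which items are negatively worded varies by SAEBRS form, so the index set in the example is a placeholder assumption.

```python
# Sketch of reverse scoring on a 0-3 scale (0 <-> 3, 1 <-> 2), applied
# to negatively worded items before summing, as described above.
# The set of negatively worded item positions is a placeholder.

def reverse_score(ratings, negative_items):
    """Reverse 0-3 ratings at the given (0-based) positions."""
    return [3 - r if i in negative_items else r
            for i, r in enumerate(ratings)]

raw = [0, 3, 2, 1]
print(reverse_score(raw, negative_items={1, 3}))  # [0, 0, 2, 2]
```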
Strengths and Difficulties Questionnaire (SDQ)
The STRENGTHS AND DIFFICULTIES QUESTIONNAIRE (SDQ; Goodman, 1997) is a validated, widely used, free-access systematic screening tool with elementary and secondary versions for students ages 2 to 17 (as of June 2014; based on the performance of the SDQ with 17-year-olds being similar to that with 15- and 16-year-olds, and on good psychometric properties with 2-year-olds; see SDQinfo.com for more information). Appropriate use of screening tools such as the SDQ allows schools to determine how students are progressing over time and to collect reliable data for drawing accurate conclusions about how different types of students (e.g., students receiving general education, students receiving special education) are responding (Lane, Wehby, Robertson, & Rogers, 2007). The 25-item SDQ is available in many languages, assesses a broad set of behavioral domains, and comes in an early years version (ages 2-4) and a standard version for elementary (ages 4-10) or secondary (ages 11-17) students. One page is completed for each student, asking about 25 positive (e.g., “Considerate of other people’s feelings”) and negative (e.g., “Fears or easily scared”) attributes, each rated on a 3-point Likert-type scale where 0 = not true, 1 = somewhat true, and 2 = certainly true, based on occurrence of each behavior during the last six months or the current school year. The 25 attributes divide among five scales:
1. Emotional Symptoms
2. Conduct Problems
3. Hyperactivity / Inattention
4. Peer Relationship Problems
5. Prosocial Behavior
and a Total Difficulties score (the sum of the first four scales).
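The scale structure above can be sketched directly: each scale sums five 0-2 item ratings, and Total Difficulties sums the four difficulty scales (Prosocial Behavior is excluded). The ratings shown are illustrative only.

```python
# Sketch of SDQ scale scoring as described above: each of the five
# scales sums five item ratings (0-2), and Total Difficulties is the
# sum of the first four scales (Prosocial Behavior is excluded).

def sdq_scores(scale_items):
    """scale_items: {scale_name: [five ratings, each 0-2]}."""
    scales = {name: sum(items) for name, items in scale_items.items()}
    scales["Total Difficulties"] = sum(
        scales[s] for s in ("Emotional Symptoms", "Conduct Problems",
                            "Hyperactivity / Inattention",
                            "Peer Relationship Problems"))
    return scales

scores = sdq_scores({
    "Emotional Symptoms":          [1, 0, 2, 1, 0],
    "Conduct Problems":            [0, 1, 0, 0, 1],
    "Hyperactivity / Inattention": [2, 2, 1, 1, 0],
    "Peer Relationship Problems":  [0, 0, 1, 0, 0],
    "Prosocial Behavior":          [2, 2, 1, 2, 2],
})
print(scores["Total Difficulties"])  # 13
```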
Systematic Screening for Behavior Disorders (SSBD, 2nd Edition)
The SYSTEMATIC SCREENING FOR BEHAVIOR DISORDERS (SSBD; Walker, Severson, & Feil, 2014) 2nd edition is validated for grades PK-9 and is a widely used multi-gated universal screening tool designed to find students who have either internalizing or externalizing patterns of behavior. Many consider the SSBD the “gold standard” in screening. This teacher-completed tool has three stages: (1) nomination and rank ordering, (2) behavior scales, and (3, optional for universal screening purposes) the School Archival Records Search (SARS) and direct observations of student behavior (classroom and playground). With the updated 2014 2nd edition, screening with the SSBD is available online or in a paper format. Screening a class of students takes less than one hour. In stage one, the classroom teacher considers all students’ behavior patterns using descriptions of the internalizing and externalizing behavior dimensions, selecting the three students whose behaviors are most like each description. These six students (three for externalizing and three for internalizing) pass through Gate 1 on to stage two. In stage two, the six students are rated by the teacher on two behavior scales: the Critical Events Index and the Combined Frequency Index of Adaptive and Maladaptive Behavior. Students who exceed normative criteria on these indices pass through Gate 2 on to stage three. In stage three, a trained professional directly observes students in an instructional setting (academic engaged time) and on the playground (positive social behavior). The preschool scales (Early Screening Project, ESP; Feil, Walker, & Severson, 1995) vary slightly in terms of behavior scales and observation settings.
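The multi-gate flow above can be sketched as a pair of filters. This is a hedged illustration only: the top-three ranking at Gate 1 follows the text, but the numeric thresholds at Gate 2 are purely hypothetical placeholders, not the SSBD's actual normative criteria.

```python
# Hedged sketch of the SSBD multi-gate flow described above. Gate 1
# passes the top three ranked students per dimension; Gate 2 passes
# students exceeding normative criteria on the two indices. The cut
# values used here are hypothetical placeholders, not SSBD norms.

def gate1(teacher_rankings):
    """teacher_rankings: {dimension: students ranked most-like-description
    first}. The top three per dimension pass through Gate 1."""
    return {dim: ranked[:3] for dim, ranked in teacher_rankings.items()}

def gate2(index_scores, critical_cut=5, frequency_cut=30):
    """index_scores: {student: (critical_events, combined_frequency)}.
    Students exceeding both (hypothetical) criteria pass through Gate 2."""
    return [s for s, (ce, cf) in index_scores.items()
            if ce > critical_cut and cf > frequency_cut]

passed = gate1({"externalizing": ["Ann", "Ben", "Cal", "Dee"],
                "internalizing": ["Eva", "Finn", "Gus", "Hal"]})
print(passed["externalizing"])  # ['Ann', 'Ben', 'Cal']
print(gate2({"Ann": (7, 41), "Ben": (2, 18), "Cal": (6, 25)}))  # ['Ann']
```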
Social Skills Improvement System – Performance Screening Guide (SSIS-PSG)
The SOCIAL SKILLS IMPROVEMENT SYSTEM – PERFORMANCE SCREENING GUIDE (SSiS-PSG; Elliott & Gresham, 2008) is an efficient screening tool for preschool through secondary grades. The SSiS-PSG is one tool in a family of products for screening, assessment, instruction, and intervention. It screens four skill domains: Prosocial Behavior, Motivation to Learn, Reading Skills, and Math Skills. Educators rate each student’s skills using 5-level descriptive criteria and can quickly review results by the color band for each domain: a green band indicates the student is at or above expected levels of functioning for their age, a yellow band indicates moderate concern, and a red band indicates a high level of concern. On the last page of the screening tool, educators then list the students in the yellow and red risk bands and determine appropriate interventions with the support of their school teams.
- Bruhn, A. L. (2011). Measuring primary plan treatment integrity of comprehensive, integrated three-tiered prevention models. (Unpublished doctoral dissertation). Vanderbilt University, Nashville, TN.
- Elliott, S. N., & Gresham, F. M. (2008). Social Skills Improvement System (SSiS): Performance Screening Guides. San Antonio, TX: PsychCorp Pearson Education.
- Feil, E. G., Walker, H. M., & Severson, H. H. (1995). The Early Screening Project for young children with behavior problems: Research and development of the early screening project. Journal of Emotional and Behavioral Disorders, 3(4), 194-202.
- Lane, K. L. (2002). Primary Prevention Plan: Feedback Form. Unpublished rating scale.
- Lane, K. L. (2009a). Teacher self-report form. Unpublished instrument.
- Lane, K. L. (2009b). Ci3T treatment integrity: Direct observation tool. Unpublished instrument.
- Lane, K. L., Kalberg, J. R., Bruhn, A. L., Driscoll, S. A., Wehby, J. H., & Elliott, S. (2009). Assessing social validity of school-wide positive behavior support plans: Evidence for the reliability and structure of the Primary Intervention Rating Scale. School Psychology Review, 38, 135-144.
- Lane, K. L., Kalberg, J. R., Bruhn, A. L., Mahoney, M. E., & Driscoll, S. A. (2008). Primary prevention programs at the elementary level: Issues of treatment integrity, systematic screening, and reinforcement. Education and Treatment of Children, 31, 466-494.
- Lane, K. L., Little, M. A., Casey, A. M., Lambert, W., Wehby, J., Weisenbach, J. L., & Phillips, A. (2009). A comparison of systematic screening tools for emotional and behavioral disorders. Journal of Emotional and Behavioral Disorders, 17(2), 93-105. DOI: 10.1177/1063426608326203
- Lane, K. L., Oakes, W. P., Common, E. A., Zorigian, K., Brunsting, N. C., & Schatschneider, C. (2014). A comparison between SRSS-IE and SSiS-PSG scores: Examining convergent validity. Assessment for Effective Intervention, 1-13. doi: 10.1177/1534508414560346
- Lane, K. L., Oakes, W. P., Ennis, R. P., & Hirsch, S. E. (2014). Identifying students for secondary and tertiary prevention efforts: How do we determine which students have Tier 2 and Tier 3 needs? Preventing School Failure, 58, 171-182.
- Lane, K. L., Oakes, W. P., Harris, P. J., Menzies, H. M., Cox, M., & Lambert, W. (2012). Initial evidence for the reliability and validity of the student risk screening scale for internalizing and externalizing behaviors at the elementary level. Behavioral Disorders, 99-122.
- Lane, K. L., Oakes, W. P., Jenkins, A., Menzies, H. M., & Kalberg, J. R. (2014). A team-based process for designing comprehensive, integrated, three-tiered (CI3T) models of prevention: How does my school-site leadership team design a CI3T model? Preventing School Failure: Alternative Education for Children and Youth, 58(3), 129-142. DOI: 10.1080/1045988X.2014.893976
- Lane, K. L., Oakes, W. P., & Magill, L. (2014). Primary prevention efforts: How do we implement and monitor the Tier 1 component of our comprehensive, integrated, three-tiered (Ci3T) model? Preventing School Failure: Alternative Education for Children and Youth, 58(3), 143-158. doi: 10.1080/1045988X.2014.893978
- Lane, K. L., Oakes, W. P., Swogger, E. D., Schatschneider, C., Menzies, H. M., & Sanchez, J. (2015). Student risk screening scale for internalizing and externalizing behaviors: Preliminary cut scores to support data-informed decision making. Behavioral Disorders, 40, 159-170.
- Lane, K. L., Robertson, E. J., & Wehby, J. H. (2002). Primary Intervention Rating Scale. Unpublished rating scale.
- Lynass, L., Tsai, S., Richmond, T., & Cheney, D. (2012). Social expectations and behavioral indicators in schoolwide positive behavior supports: A national study of behavior matrices. Journal of Positive Behavior Interventions, 14, 153-161.
- Oakes, W. P., Lane, K. L., Jenkins, A., & Booker, B. B. (2013). Three-tiered models of prevention: Teacher efficacy and burnout. Education and Treatment of Children, 36(4), 95-126.
- Walker, H. M., Severson, H. H., & Feil, E. G. (2014). Systematic screening for behavior disorders (SSBD) technical manual: Universal screening for preK–9 (2nd ed.). Eugene, OR: Pacific Northwest Publishing.