ReadBasix, previously known as RISE or SARA, is the most researched diagnostic assessment on the market, built on over two decades of research by a team of distinguished reading scientists, assessment researchers, and reading intervention practitioners at ETS and the SERP Institute. To fully appreciate the depth and academic importance of this product, we briefly review the history of ReadBasix and cite relevant research.
In Fall 2022, Capti, ETS, and MetaMetrics conducted a Lexile alignment study with almost 4,000 students in grades 3-12 in 18 schools across the country. The students completed both ReadBasix and MetaMetrics Lexile assessments, and the resulting data enabled alignment between ReadBasix and the Lexile scale. In Spring 2023, full support for reporting Lexile measures based on a student's performance on 3 of the 6 ReadBasix subtests (sentence processing, reading efficiency, and reading comprehension) was rolled out in Capti Assess.
In 2020, Capti became the official ETS distributor and made ReadBasix available for sale. Since its release, ReadBasix has been adopted by numerous school districts across the U.S.
In 2018, Capti partnered with ETS to bring ReadBasix to market. Capti integrated ReadBasix into its Capti Assess platform and iteratively tested and improved the product.
In 2016, the team of reading scientists, assessment researchers, and reading intervention practitioners at ETS and the SERP Institute conducted a national norming study in grades 3-12, after which RISE (Reading Inventory and Scholastic Evaluation) evolved into its current form under the name ReadBasix.
In 2012, field tests of RISE expanded to include a large district in Maryland. These tests allowed ETS and the SERP Institute to refine RISE based on user feedback and analysis of the data.
In 2010, ETS was awarded an assessment grant under the Reading for Understanding Initiative funded by the Institute of Education Sciences (IES) at the U.S. Department of Education. This funding allowed the assessment to be expanded to cover more grade levels, from Grade 3 in elementary school through high school.
In 2007, one of the first large-scale administrations of RISE took place in a school district in Massachusetts, allowing RISE to be field-tested for the first time with students in entire middle schools.
In 2004, Dr. John Sabatini at the Educational Testing Service (ETS) began creating the RISE (Reading Inventory and Scholastic Evaluation) assessment, the predecessor to ReadBasix. The work grew out of a collaboration with the Strategic Education Research Partnership (SERP) Institute, a group that works closely with school districts across the U.S. School districts in Massachusetts had noticed that many of their middle school students were arriving in 6th grade with weak reading skills, but the schools were not equipped to identify students' exact reading skill weaknesses, or what to do about them. RISE was initially designed specifically for middle school students, to give schools the information they needed to help struggling readers. The project was funded by grants from SERP, Carnegie, and Lila Wallace.
This is the third and most recent edition of the technical report for the ReadBasix (SARA / RISE) assessment battery. This report expands on the first and second reports by featuring a national sample of students from grades 3-12 (the first report covered grades 6-8; the second, grades 5-10). It includes a theoretical overview of the battery of assessments, which features a subtest for each foundational skill (word recognition and decoding, vocabulary, morphology, sentence processing, and reading efficiency) as well as for basic reading comprehension. The report also includes psychometric analyses, an item response theory scaling study, an evaluation of multidimensionality, validity evidence, and an evaluation of differential item functioning for gender and race/ethnicity.
The second edition of the technical report on the ReadBasix (SARA / RISE) assessment battery expands on the first report by featuring grades 5-10 (the original covered grades 6-8). Included in this report are analyses for each subtest (word recognition and decoding, vocabulary, morphology, sentence processing, reading efficiency, and basic reading comprehension), psychometric analyses of parallel forms of each subtest, results of item response theory scaling studies for each subtest across the entire grade span, and an evaluation of differential item functioning for gender and race/ethnicity.
This is the first technical report on the ReadBasix assessment (SARA / RISE). ReadBasix was originally designed for struggling readers in middle school because teachers within a large, urban district wanted more information about why their students were struggling to read. The battery of assessments includes a subtest for each foundational skill: word recognition and decoding, vocabulary, morphology, sentence processing, and reading efficiency, as well as for basic reading comprehension. This report details the research base that supports the design and development of the reading skills components battery, and describes a pilot study with students in grades 6-8.
This article presents research suggesting that high school students' academic knowledge is highly predictive of performance on traditional comprehension assessments, which require identifying information and drawing inferences from single texts, but less predictive of performance on scenario-based assessments, which call for integrating, evaluating, and applying information across multiple sources. Within the study, shortened versions of three ReadBasix subtests (vocabulary, morphology, and sentence processing) all strongly predicted academic knowledge (r's = .43-.57) and reading comprehension on both a traditional comprehension test (r's = .56-.57) and a scenario-based comprehension test (r's = .50-.54). The strength of the relation between ReadBasix and either comprehension test was comparable to the relation between the two comprehension tests (r = .57). The results demonstrate that the ReadBasix subtests are valid indicators of students' academic achievement, single-text comprehension, and scenario-based multiple-text comprehension.
This article presents research from two studies that compared poor and normal decoders' processing times on real words, pseudo-homophones, and nonwords (Study 1), and evaluated how a processing-time difference is associated with rates of decoding development (Study 2). The results suggest that poor decoders spend more time recognizing real words and pseudo-homophones but less time on nonwords, whereas normal decoders spend more time decoding nonwords. The researchers concluded that poor decoders may be trapped in a vicious cycle in which poor decoding skill, combined with less time spent attempting to decode novel words, interferes with decoding development.
This article presents research from two studies that examined the relation between decoding and reading comprehension among middle and high school students. Using prominent reading theories as a basis, the authors propose the Decoding Threshold Hypothesis, which suggests that the relation between decoding and reading comprehension can be reliably observed only above a certain decoding threshold. Study 1 tested the Decoding Threshold Hypothesis: researchers found a reliable decoding threshold value below which there was no relation between decoding and reading comprehension, and above which the two measures showed a positive linear relation. Study 2 presented a longitudinal analysis of reading comprehension growth as a function of initial decoding status. Results showed that scoring below the decoding threshold was associated with stagnant growth in reading comprehension, whereas scoring above it was associated with accelerating reading comprehension growth from grade to grade.
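To make the shape of this hypothesized relation concrete, the sketch below illustrates a simple piecewise ("broken-stick") form consistent with the Decoding Threshold Hypothesis. The threshold location, baseline level, and slope are hypothetical placeholders chosen for illustration only, not the values estimated in the study.

```python
# Illustrative sketch of the Decoding Threshold Hypothesis: below a hypothetical
# threshold, expected comprehension is flat (no relation to decoding); above it,
# the relation is positive and linear. All numbers are placeholders, not study estimates.

DECODING_THRESHOLD = 230.0        # hypothetical decoding scale score
BASELINE_COMPREHENSION = 210.0    # hypothetical expected comprehension below the threshold
SLOPE_ABOVE_THRESHOLD = 0.8       # hypothetical slope above the threshold

def expected_comprehension(decoding_score: float) -> float:
    """Piecewise ("broken-stick") model of comprehension as a function of decoding."""
    if decoding_score <= DECODING_THRESHOLD:
        return BASELINE_COMPREHENSION
    return BASELINE_COMPREHENSION + SLOPE_ABOVE_THRESHOLD * (decoding_score - DECODING_THRESHOLD)

for score in (200, 230, 245, 260):
    print(score, expected_comprehension(score))
```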
This article describes an early conception of ReadBasix, designed to measure six component and integrated reading skills, and examines the assessment's fit within a Response to Intervention (RTI) framework. Aligning ReadBasix with research in cognitive science, reading, and learning allowed researchers to create an assessment that can help identify weaknesses in each of the six foundational skills. Additionally, the battery was found to be more predictive for students who were struggling readers. From the information provided by the assessment's results, educators can make more informed decisions about who needs help, what help is needed, and whether the instructional support is effective.
This article presents a developmentally sensitive reading comprehension assessment grounded in a scenario-based assessment paradigm and designed to reflect the evolving construct of reading comprehension. Evidence for the concurrent validity of ReadBasix is included. The authors found the ReadBasix comprehension subtest to be correlated with external measures of reading comprehension, specifically the Gates-MacGinitie reading test and the scenario-based assessment. The correlation with the scenario-based assessment is important because that assessment taps higher-level comprehension constructs, showing that these higher-level constructs are related to the foundational comprehension measured by ReadBasix.
This research study examined the effect of reading purpose on participants’ reading behaviors using eye-tracking technologies. Proficient undergraduate students read four passages; two required participants to write a summary, and two required answering multiple choice questions. Results indicated that more time was spent constructing a coherent mental model of text content (deep comprehension) when the purpose for reading included a written summary as compared to only answering multiple choice questions. This study provided evidence for content validity of the ReadBasix assessment because reading relevant parts of passages facilitated answering comprehension questions.
This research study investigated how individual differences interacted with task requirements, using eye-tracking technology to measure undergraduate students' reading efficiency. Researchers found that participants spent more time reading when the task required a written summary than when it required only answering multiple choice questions. The additional time spent reading benefited students with relatively low reading efficiency, who were able to answer the multiple choice questions more efficiently after writing a summary. The results provide evidence of the structural validity of ReadBasix by showing convergence among reading comprehension, fluency, and summary writing measures.
This study presents data from two measures that were designed to provide a more holistic picture of reading comprehension: the Reading Inventory and Scholastic Evaluation (RISE), now known as ReadBasix, and the Global, Integrated Scenario-Based Assessment (GISA), now known as ReadAuthentix in the Capti Assess suite of assessments. The results show that each subtest of ReadBasix predicted unique variance on ReadAuthentix. Further, this study provides evidence for measuring foundational reading skills (the five foundational-skill subtests of ReadBasix) when assessing reading comprehension, because weaknesses in lower-level foundational skills may impede comprehension.
The primary purpose of this study was to link the ReadBasix sentence processing, reading efficiency, and reading comprehension subtests to the Lexile Framework for Reading. ReadBasix subtest scale scores can now be used to match students with appropriately leveled texts, leveraging tools such as the Lexile “Find A Book” service, and to answer questions related to standards, test score interpretation, and test validation. A predictive function was constructed to transform ReadBasix sentence processing, reading efficiency, and reading comprehension subtest scale scores into Lexile reading measures. The regression approach allows a profile of ReadBasix scores to be combined into a single predicted Lexile reading measure, rather than requiring a separate function for each subtest.
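The published linking function and its coefficients are not reproduced here. As a rough illustration of the general approach under stated assumptions, the sketch below fits a single multiple-regression equation that combines a profile of three subtest scale scores into one predicted Lexile measure; the data, coefficients, and score scales are hypothetical placeholders.

```python
# Rough illustration of a regression-based linking approach (not the published
# ReadBasix-to-Lexile function): one equation combines a student's profile of
# subtest scale scores into a single predicted Lexile measure.
# All data, coefficients, and score scales below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration sample: columns are sentence processing, reading
# efficiency, and reading comprehension scale scores for 500 students.
subtest_scores = rng.normal(loc=[250.0, 245.0, 255.0], scale=15.0, size=(500, 3))
lexile_measures = (
    5.0 * subtest_scores[:, 0]
    + 4.0 * subtest_scores[:, 1]
    + 6.0 * subtest_scores[:, 2]
    - 2900.0
    + rng.normal(scale=60.0, size=500)
)

# Fit one multiple-regression linking function (intercept plus three slopes).
design = np.column_stack([np.ones(len(subtest_scores)), subtest_scores])
coefficients, *_ = np.linalg.lstsq(design, lexile_measures, rcond=None)

def predict_lexile(sentence_processing: float, reading_efficiency: float,
                   reading_comprehension: float) -> float:
    """Apply the fitted linking function to one student's subtest profile."""
    profile = np.array([1.0, sentence_processing, reading_efficiency, reading_comprehension])
    return float(profile @ coefficients)

print(round(predict_lexile(250, 245, 255)))
```

The point of the single-equation form is only to convey why combining the full score profile in one regression is preferable to maintaining a separate conversion function for each subtest.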
This article presents evidence suggesting potential thresholds in foundational reading skills that may limit college students' reading comprehension on both close and applied literacy tasks. The research extends the work of Wang, Sabatini, O'Reilly, and Weeks (2019), which found that students' growth in reading comprehension was conditional on their decoding scores, by exploring whether there are thresholds in foundational skills that may limit reading comprehension for college students. The study included students who were determined to be underprepared for college and assigned to developmental literacy programs, as well as students who were determined to be prepared for college. The findings suggest that there are thresholds for foundational reading skills (decoding/word recognition, morphological knowledge, and sentence processing) that have implications for students' inclination to engage in the reading comprehension strategies of paraphrasing, bridging, and elaborating, all higher-level literacy tasks. Students who fell below the thresholds employed reading strategies at a lower level than those above the thresholds. These are important findings, as they highlight problems with foundational reading skills that may persist into college.
This article shares research on READI, a reading intervention designed to increase students' reading comprehension. The Reading Inventory and Scholastic Evaluation (RISE), also known as ReadBasix, was used as the pretest, and the Global, Integrated Scenario-Based Assessment (GISA), now known as ReadAuthentix, was used as the posttest. Both ReadBasix and ReadAuthentix are part of the Capti Assess suite of assessments. Ninth-graders' performance on the comprehension measures suggests that the skills measured by ReadBasix are related to the deep comprehension required by ReadAuthentix.
This article shares research on the Strategic Adolescent Reading Intervention (STARI), a supplemental reading program built around peer- and discussion-based instruction that supports word-reading skills, fluency, vocabulary, and comprehension. ReadBasix (formerly known as RISE) was used to measure the success of the intervention based on students' scores. The results from students in grades 6-8 indicate that the skills assessed by ReadBasix can be improved through targeted reading interventions such as STARI.
This research article shares evidence that a large amount of the variance in reading comprehension can be attributed to oral language, specifically lexical knowledge. The findings differ from the Simple View of Reading proposed by Gough and Tunmer (1986), which suggests that decoding and language comprehension are what contribute to reading comprehension. The study also provides evidence for the concurrent validity of ReadBasix, as the component subtests were predictive of reading comprehension. ReadBasix subtests, specifically vocabulary and morphology, correlated with the Gates-MacGinitie reading test.
The research and development of ReadBasix was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305F100005 to the Educational Testing Service (ETS) as part of the Reading for Understanding Research (RFU) Initiative, as well as through Small Business Innovation Research (SBIR) program contracts 91990019C0024, 91990021C0029, and 91990022C0042 to Charmtech Labs LLC. The opinions expressed are those of the authors and do not represent the views of the Institute or the U.S. Department of Education.