Showing 1 to 15 of 26 results
Peer reviewed
Kang, Hyeon-Ah; Zheng, Yi; Chang, Hua-Hua – Journal of Educational and Behavioral Statistics, 2020
With the widespread use of computers in modern assessment, online calibration has become increasingly popular as a way of replenishing an item pool. The present study discusses online calibration strategies for a joint model of responses and response times. The study proposes likelihood inference methods for item parameter estimation and evaluates…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Reaction Time
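The abstract does not reproduce the joint model or its estimators. As a rough illustration of the general idea (an assumed setup pairing a 2PL response model with a lognormal response-time model, a common choice in this literature), the sketch below calibrates a single pretest item by maximizing its joint likelihood while holding each examinee's provisional ability and speed estimates fixed; all names and parameter values are hypothetical.

    # Illustrative sketch only; not the article's exact model or estimator.
    import numpy as np
    from scipy.optimize import minimize

    def joint_neg_loglik(item_params, theta, tau, responses, log_times):
        """Negative joint log-likelihood of one pretest item across examinees.

        item_params = (a, b, alpha, beta): 2PL discrimination and difficulty,
        plus the lognormal model's time precision and time intensity.
        theta, tau: provisional ability and speed estimates per examinee.
        """
        a, b, alpha, beta = item_params
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))         # 2PL success probability
        ll_resp = responses * np.log(p) + (1 - responses) * np.log(1 - p)
        mu = beta - tau                                     # expected log response time
        ll_time = np.log(alpha) - 0.5 * np.log(2 * np.pi) - 0.5 * (alpha * (log_times - mu)) ** 2
        return -(ll_resp + ll_time).sum()

    # Simulated online-calibration data for one pretest item (made-up values).
    rng = np.random.default_rng(0)
    n = 500
    theta = rng.normal(size=n)                  # provisional abilities
    tau = rng.normal(scale=0.3, size=n)         # provisional speeds
    a0, b0, alpha0, beta0 = 1.2, 0.3, 1.5, 0.8
    responses = rng.binomial(1, 1 / (1 + np.exp(-a0 * (theta - b0))))
    log_times = rng.normal(beta0 - tau, 1 / alpha0)

    fit = minimize(joint_neg_loglik, x0=(1.0, 0.0, 1.0, 0.5),
                   args=(theta, tau, responses, log_times),
                   method="L-BFGS-B",
                   bounds=[(0.2, 3), (-3, 3), (0.2, 3), (-3, 3)])
    print("estimated (a, b, alpha, beta):", np.round(fit.x, 2))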
Steven J. Carter; Matthew P. Wilcox; Neil J. Anderson – Reading in a Foreign Language, 2023
This research presents a novel reading fluency (RF) measurement formula that accounts for both reading rate and comprehension. Possible formulas were investigated with 68 participants in a strategic reading course in an IEP at a small Pacific Island university. The selected formula's scores demonstrated concurrent validity through strong…
Descriptors: Second Language Learning, Silent Reading, Reading Fluency, Reading Comprehension
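The selected formula itself is not given in the abstract, so the sketch below is purely hypothetical: it combines the two components by weighting silent-reading rate (words per minute) by the proportion of comprehension items answered correctly. The study's actual formula may differ.

    # Hypothetical illustration; not the formula selected in the study.
    def reading_fluency_score(words_read: int, seconds: float,
                              items_correct: int, items_total: int) -> float:
        """Comprehension-weighted words per minute."""
        wpm = words_read / (seconds / 60.0)
        comprehension = items_correct / items_total
        return wpm * comprehension

    # Example: 900 words read in 5 minutes with 8 of 10 comprehension items correct.
    print(reading_fluency_score(900, 300, 8, 10))  # 180 wpm * 0.8 = 144.0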
Peer reviewed
Jason Schoeneberger; Xiaodong Zhang; Samantha Spinney; Jing Sun; Lauren Kennedy; Samira Rajesh Syal – Grantee Submission, 2023
The purpose of this study was to understand the impact, implementation and costs associated with a one-semester elective lab course in 9th grade, Accelerating Literacy for Adolescents (ALFA) Lab, which seeks to improve students' reading achievement, particularly for those from economically disadvantaged communities. This study used three cohorts…
Descriptors: High School Students, Grade 9, Learning Laboratories, Reading Centers
Peer reviewed
Kim, Sooyeon; Moses, Tim – ETS Research Report Series, 2016
The purpose of this study is to evaluate the extent to which item response theory (IRT) proficiency estimation methods are robust to the presence of aberrant responses under the "GRE"® General Test multistage adaptive testing (MST) design. To that end, a wide range of atypical response behaviors affecting as much as 10% of the test items…
Descriptors: Item Response Theory, Computation, Robustness (Statistics), Response Style (Tests)
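As a rough sketch of this kind of robustness check (an assumed setup, not the study's MST design), one can contaminate a share of simulated responses with random guessing and compare maximum likelihood proficiency estimates before and after; all item parameters below are invented.

    # Assumed setup for illustration; not the ETS study's design.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    n_items = 40
    a = rng.uniform(0.8, 2.0, n_items)        # discriminations
    b = rng.normal(0.0, 1.0, n_items)         # difficulties

    def mle_theta(x):
        """Maximum likelihood ability estimate under the 2PL for response vector x."""
        def neg_ll(theta):
            p = 1 / (1 + np.exp(-a * (theta - b)))
            return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
        return minimize_scalar(neg_ll, bounds=(-4, 4), method="bounded").x

    true_theta = 1.0
    clean = rng.binomial(1, 1 / (1 + np.exp(-a * (true_theta - b))))

    aberrant = clean.copy()
    idx = rng.choice(n_items, size=int(0.10 * n_items), replace=False)
    aberrant[idx] = rng.binomial(1, 0.25, size=idx.size)   # random guessing on 10% of items

    print("theta-hat, clean responses:   ", round(mle_theta(clean), 2))
    print("theta-hat, aberrant responses:", round(mle_theta(aberrant), 2))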
OECD Publishing, 2019
The OECD Programme for International Student Assessment (PISA) examines what students know in reading, mathematics and science, and what they can do with what they know. It provides the most comprehensive and rigorous international assessment of student learning outcomes to date. Results from PISA indicate the quality and equity of learning…
Descriptors: Test Results, Achievement Tests, Foreign Countries, International Assessment
Northwest Evaluation Association, 2015
Measures of Academic Progress® (MAP®) computer adaptive interim assessments serve many purposes, from informing instruction to identifying students for intervention to projecting proficiency on state accountability assessments. To make sure its flagship product does the latter, Northwest Evaluation Association™ (NWEA™) routinely conducts studies…
Descriptors: Achievement Tests, Computer Assisted Testing, Adaptive Testing, Scores
Peer reviewed
Foorman, Barbara; Espinosa, Anabel; Wood, Carla; Wu, Yi-Chieh – Regional Educational Laboratory Southeast, 2016
A top education priority in the United States is to address the needs of one of the fastest growing yet lowest performing student populations--English learner students (Capps et al., 2005). English learner students come from homes where a non-English language is spoken and need additional academic support to access the mainstream curriculum. These…
Descriptors: Computer Assisted Testing, Adaptive Testing, Literacy, English Language Learners
Northwest Evaluation Association, 2014
In order to determine which test is more appropriate to administer to elementary grade students, it is important to consider the purpose of the test in conjunction with the ability and grade level of the students. Northwest Evaluation Association™ (NWEA™) designed Measures of Academic Progress® (MAP®) tests mindful of the amount of learning that…
Descriptors: Elementary School Students, Achievement Tests, Mathematics Achievement, Reading Achievement
Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris – Florida Center for Reading Research, 2015
The FAIR-FS consists of computer-adaptive reading comprehension and oral language screening tasks that provide measures to track growth over time, as well as a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 40th percentile) on the reading comprehension subtest of the Stanford Achievement Test (SAT-10) in the…
Descriptors: Reading Instruction, Screening Tests, Reading Comprehension, Oral Language
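The FAIR-FS reporting equations are not given here. As an illustration of the general idea, a screening score can be mapped to a probability of later scoring at or above a percentile cut with a logistic model; the coefficients below are invented placeholders, not FAIR-FS values.

    # Invented coefficients; illustrates a logistic "probability of success"
    # score in general, not the FAIR-FS equations.
    import math

    def probability_of_literacy_success(screen_score: float,
                                        intercept: float = -6.0,
                                        slope: float = 0.015) -> float:
        """Logistic mapping from a screening score to P(outcome >= 40th percentile)."""
        return 1.0 / (1.0 + math.exp(-(intercept + slope * screen_score)))

    print(round(probability_of_literacy_success(450.0), 2))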
Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium designed to create next-generation assessments that, compared to traditional K-12 assessments, more accurately measure student progress toward college and career readiness. The PARCC assessments are aligned to the Common Core State Standards…
Descriptors: Standardized Tests, Career Readiness, College Readiness, Test Validity
Christensen, Laurene L.; Albus, Debra A.; Liu, Kristin K.; Thurlow, Martha L.; Kincaid, Aleksis – National Center on Educational Outcomes, 2013
English language learners (ELLs) with disabilities are required to participate in all state and district assessments, just as their peers without disabilities do. This includes assessments used for Elementary and Secondary Education Act (ESEA) Title I accountability purposes for demonstrating proficiency in academic content, assessments used…
Descriptors: English Language Learners, State Policy, Disabilities, Student Participation
Krass, Iosif A.; Thomasson, Gary L. – 1999
New items are being calibrated for the next generation of the computerized adaptive (CAT) version of the Armed Services Vocational Aptitude Battery (ASVAB) (Forms 5 and 6). The requirements that the items be "good" three-parameter logistic (3-PL) model items and typically "like" items in the previous CAT-ASVAB tests have…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Nonparametric Statistics
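For reference, the three-parameter logistic (3-PL) model mentioned above has the standard form P(correct | theta) = c + (1 - c) / (1 + exp(-a(theta - b))), with discrimination a, difficulty b, and lower asymptote (pseudo-guessing) c:

    import math

    def three_pl(theta: float, a: float, b: float, c: float) -> float:
        """Probability of a correct response under the 3-PL model."""
        return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

    # Example: an average examinee (theta = 0) on a moderately hard item.
    print(round(three_pl(0.0, a=1.3, b=0.5, c=0.2), 3))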
Bowles, Ryan; Pommerich, Mary – 2001
Many arguments have been made against allowing examinees to review and change their answers after completing a computer adaptive test (CAT). These arguments include: (1) increased bias; (2) decreased precision; and (3) susceptibility to test-taking strategies. Results of simulations suggest that the strength of these arguments is reduced or…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Review (Reexamination)
Glas, Cees A. W.; Vos, Hans J. – 1998
A version of sequential mastery testing is studied in which response behavior is modeled by an item response theory (IRT) model. First, a general theoretical framework is sketched that is based on a combination of Bayesian sequential decision theory and item response theory. A discussion follows on how IRT-based sequential mastery testing can be…
Descriptors: Adaptive Testing, Bayesian Statistics, Item Response Theory, Mastery Tests
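A much-simplified sketch of the idea (a plain posterior-threshold rule under a Rasch model, not the Bayesian decision-theoretic machinery the paper develops): after each administered item, update a grid posterior over ability and stop as soon as the posterior probability of mastery clears either decision threshold. The thresholds, difficulties, and cut score below are illustrative.

    # Simplified illustration; the paper's loss-based sequential rules are richer.
    import numpy as np

    grid = np.linspace(-4, 4, 161)

    def rasch_p(theta, b):
        return 1 / (1 + np.exp(-(theta - b)))

    def sequential_mastery(responses, difficulties, cut=0.0, pass_at=0.95, fail_at=0.05):
        posterior = np.exp(-0.5 * grid**2)             # standard normal prior on ability
        posterior /= posterior.sum()
        for x, b in zip(responses, difficulties):
            p = rasch_p(grid, b)
            posterior *= p if x == 1 else (1 - p)      # Bayes update with the item likelihood
            posterior /= posterior.sum()
            p_master = posterior[grid >= cut].sum()    # posterior probability ability >= cut
            if p_master >= pass_at:
                return "master", p_master
            if p_master <= fail_at:
                return "nonmaster", p_master
        return "continue testing", p_master

    print(sequential_mastery(responses=[1, 1, 0, 1, 1, 1, 1],
                             difficulties=[0.0, 0.4, -0.2, 0.6, 0.1, 0.3, 0.5]))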
Green, Bert F. – 2002
Maximum likelihood and Bayesian estimates of proficiency, typically used in adaptive testing, use item weights that depend on the very proficiency being estimated. In this study, several methods were explored through computer simulation using fixed item weights, which depend mainly on the item's difficulty. The simpler scores…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Computer Simulation
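The study's exact scoring rules are not reproduced in the abstract. The sketch below contrasts the two ideas in assumed form: a 2PL maximum likelihood estimate, whose effective item weights vary with the examinee's proficiency, versus a simple score whose weights are fixed in advance from the item parameters alone; all parameter values are made up.

    # Assumed forms for illustration; not the study's exact scoring schemes.
    import numpy as np
    from scipy.optimize import minimize_scalar

    a = np.array([0.8, 1.0, 1.4, 1.7, 2.0])    # discriminations
    b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulties
    x = np.array([1, 1, 1, 0, 0])              # observed response pattern

    def mle_score(x):
        """2PL maximum likelihood ability estimate (proficiency-dependent weighting)."""
        def neg_ll(theta):
            p = 1 / (1 + np.exp(-a * (theta - b)))
            return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
        return minimize_scalar(neg_ll, bounds=(-4, 4), method="bounded").x

    def fixed_weight_score(x):
        """Score with weights fixed in advance from item parameters only."""
        return float(np.dot(a, x) / a.sum())

    print("MLE theta estimate:      ", round(mle_score(x), 2))
    print("fixed-weight proportion: ", round(fixed_weight_score(x), 2))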