Showing 31 to 45 of 1,333 results
Benjamin W. Domingue; Klint Kanopka; Ben Stenhaug; James Soland; Megan Kuhfeld; Steve Wise; Chris Piech – Grantee Submission, 2021
The more frequent collection of response time data is leading to an increased need for an understanding of how such data can be included in measurement models. Models for response time have been advanced, but relatively limited large-scale empirical investigations have been conducted. We take advantage of a large dataset from the adaptive NWEA MAP…
Descriptors: Achievement Tests, Reaction Time, Reading Tests, Accuracy
Peer reviewed
Direct link
Benjamin W. Domingue; Klint Kanopka; Ben Stenhaug; James Soland; Megan Kuhfeld; Steve Wise; Chris Piech – Journal of Educational Measurement, 2021
The more frequent collection of response time data is leading to an increased need for an understanding of how such data can be included in measurement models. Models for response time have been advanced, but relatively limited large-scale empirical investigations have been conducted. We take advantage of a large data set from the adaptive NWEA…
Descriptors: Achievement Tests, Reaction Time, Reading Tests, Accuracy
Peer reviewed
Direct link
Barrett, Michelle D.; Jiang, Bingnan; Feagler, Bridget E. – International Journal of Artificial Intelligence in Education, 2022
The appeal of a shorter testing time makes a computer adaptive testing approach highly desirable for use in multiple assessment and learning contexts. However, for those who have been tasked with designing, configuring, and deploying adaptive tests for operational use at scale, preparing an adaptive test is anything but simple. The process often…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Construction, Design Requirements
Peer reviewed
Direct link
Falk, Carl F.; Feuerstahler, Leah M. – Educational and Psychological Measurement, 2022
Large-scale assessments often use a computer adaptive test (CAT) for selection of items and for scoring respondents. Such tests often assume a parametric form for the relationship between item responses and the underlying construct. Although semi- and nonparametric response functions could be used, there is scant research on their performance in a…
Descriptors: Item Response Theory, Adaptive Testing, Computer Assisted Testing, Nonparametric Statistics
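The parametric response functions referenced in the entry above are typically item response theory models such as the two-parameter logistic (2PL). The following is a minimal, purely illustrative sketch, not drawn from the cited paper, using invented item parameters, of a 2PL response function and maximum-information item selection of the kind used in CAT.

```python
# Illustrative sketch only: 2PL item response function and
# maximum-information item selection. Item parameters are invented.
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

# Hypothetical item bank: (discrimination a, difficulty b) pairs
bank = np.array([(1.2, -0.5), (0.8, 0.0), (1.5, 0.7), (1.0, 1.2)])

theta_hat = 0.4  # current provisional ability estimate
info = np.array([item_information(theta_hat, a, b) for a, b in bank])
next_item = int(np.argmax(info))  # administer the most informative item
print(f"select item {next_item} with information {info[next_item]:.3f}")
```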
Iwuoha-Njoku, Ogechi – ProQuest LLC, 2022
This study employed a quantitative research design to examine teacher assessment literacy and its relationship to external and internal factors. The researcher sought to describe the level of assessment literacy that teachers demonstrate when using score report data from computer-adaptive interim assessment and to analyze whether teachers'…
Descriptors: Assessment Literacy, Computer Assisted Testing, Adaptive Testing, Faculty Development
Peer reviewed
Direct link
Chen, Dandan – International Journal for Educational and Vocational Guidance, 2023
The first two decades of the twenty-first century witnessed the blossoming of frameworks to conceptualize "21st-century skills" both in school and career. However, there is still a lack of consensus about what we are talking about when using the term "21st-century skills," as the existing frameworks on "21st-century…
Descriptors: 21st Century Skills, Research Reports, Educational Research, Trend Analysis
Peer reviewed
Direct link
Yigiter, Mahmut Sami; Dogan, Nuri – Measurement: Interdisciplinary Research and Perspectives, 2023
In recent years, Computerized Multistage Testing (MST), with its versatile benefits, has found wide application in large-scale assessments and has grown in popularity. The fact that test forms can be assembled before administration, as in a linear test, and can still be adapted to the test taker's ability…
Descriptors: Programming Languages, Monte Carlo Methods, Computer Assisted Testing, Test Format
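As background for the MST simulation work described in the entry above, here is a toy sketch of routing simulees through pre-assembled modules. The two-stage design, module difficulties, cut score, and 1PL-style response model are all assumptions for illustration, not the cited study's design.

```python
# Illustrative sketch only: a toy Monte Carlo run of a two-stage MST,
# routing simulees to an easy or hard second-stage module. All values invented.
import numpy as np

rng = np.random.default_rng(0)

def administer_module(theta, difficulties, a=1.0):
    """Simulate responses to a pre-assembled module under a 1PL-style model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - difficulties)))
    return (rng.random(difficulties.size) < p).sum()  # number-correct score

routing = np.array([-1.0, 0.0, 1.0])          # routing module, assembled in advance
easy    = np.array([-2.0, -1.5, -1.0, -0.5])  # second-stage modules
hard    = np.array([0.5, 1.0, 1.5, 2.0])

for theta in (-1.5, 0.0, 1.5):                # a few simulated test takers
    route_score = administer_module(theta, routing)
    stage2 = hard if route_score >= 2 else easy   # adapt module to performance
    total = route_score + administer_module(theta, stage2)
    print(f"theta={theta:+.1f}: routed to {'hard' if stage2 is hard else 'easy'}, total={total}")
```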
Peer reviewed
Direct link
Lim, Hwanggyu; Choe, Edison M. – Journal of Educational Measurement, 2023
The residual differential item functioning (RDIF) detection framework was developed recently under a linear testing context. To explore the potential application of this framework to computerized adaptive testing (CAT), the present study investigated the utility of the RDIF[subscript R] statistic both as an index for detecting uniform DIF of…
Descriptors: Test Items, Computer Assisted Testing, Item Response Theory, Adaptive Testing
Samira Syal; Marcia Davis; Xiaodong Zhang; Jason Schoeneberger; Samantha Spinney; Douglas J. Mac Iver; Martha Mac Iver – Grantee Submission, 2023
Motivation to read is crucial to improving reading skill. While there is extensive research examining reading motivation among elementary students, research on adolescents is limited. Employing a person-centered approach can aid in developing a better understanding of adolescent reading motivation and would help address possible…
Descriptors: Reading Motivation, Adolescents, Reading Achievement, High School Students
Peer reviewed
PDF on ERIC Download full text
Ince Araci, F. Gul; Tan, Seref – International Journal of Assessment Tools in Education, 2022
Computerized Adaptive Testing (CAT) is a beneficial testing technique that decreases the number of items that need to be administered by selecting items matched to each individual's ability level. After CAT applications were first built on unidimensional Item Response Theory (IRT), Multidimensional CAT (MCAT) applications have…
Descriptors: Adaptive Testing, Computer Assisted Testing, Simulation, Item Response Theory
Peer reviewed
Direct link
Lewis, Jennifer; Lim, Hwanggyu; Padellaro, Frank; Sireci, Stephen G.; Zenisky, April L. – Educational Measurement: Issues and Practice, 2022
Setting cut scores on multistage tests (MSTs) is difficult, particularly when the test spans several grade levels, and the selection of items from MST panels must reflect the operational test specifications. In this study, we describe, illustrate, and evaluate three methods for mapping panelists' Angoff ratings into cut scores on the scale underlying an MST. The…
Descriptors: Cutting Scores, Adaptive Testing, Test Items, Item Analysis
Peer reviewed
Direct link
Daocheng Hong – Interactive Learning Environments, 2024
The digital transformation of education is accelerating greatly across various computer-supported applications. As particularly prominent human-machine interactive systems, intelligent learning systems aim to capture users' current intentions and provide recommendations through real-time feedback. However, we have a limited…
Descriptors: Feedback (Response), Users (Information), Learner Engagement, Tests
Montserrat Beatriz Valdivia Medinaceli – ProQuest LLC, 2023
My dissertation examines three current challenges of international large-scale assessments (ILSAs) associated with the transition from linear testing to an adaptive testing design. ILSAs are important for making comparisons among populations and informing countries about the quality of their educational systems. ILSA's results inform policymakers…
Descriptors: International Assessment, Achievement Tests, Adaptive Testing, Test Items
Peer reviewed
Direct link
Emily R. Forcht; Ethan R. Van Norman – Psychology in the Schools, 2024
The present study compared the diagnostic accuracy of a single computer adaptive test (CAT), Star Reading or Star Math, and a combination of the two in a gated screening framework to predict end-of-year proficiency in reading and math. Participants included 13,009 students in Grades 3-8 who had at least one fall screening score and end-of-year…
Descriptors: Computer Assisted Testing, Adaptive Testing, Diagnostic Tests, Screening Tests
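The diagnostic accuracy evaluated in gated screening studies like the one above reduces to classification statistics such as sensitivity and specificity against the end-of-year outcome. Below is a minimal sketch with invented scores and a hypothetical cut point, not the cited study's data or method.

```python
# Illustrative sketch only: sensitivity and specificity of a screening cut
# score against end-of-year proficiency. Scores and cut point are invented.
import numpy as np

fall_score = np.array([180, 205, 212, 190, 230, 198, 221, 186])        # fall CAT screening scores
eoy_proficient = np.array([0, 0, 1, 0, 1, 0, 1, 1], dtype=bool)        # end-of-year outcome

cut = 200                       # flag students scoring below the cut as at risk
at_risk = fall_score < cut

# "Positive" = flagged at risk; the condition of interest is NOT reaching proficiency.
true_pos = np.sum(at_risk & ~eoy_proficient)
false_neg = np.sum(~at_risk & ~eoy_proficient)
true_neg = np.sum(~at_risk & eoy_proficient)
false_pos = np.sum(at_risk & eoy_proficient)

sensitivity = true_pos / (true_pos + false_neg)
specificity = true_neg / (true_neg + false_pos)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```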
Peer reviewed
Direct link
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests