Showing 211 to 225 of 1,333 results
Peer reviewed
Full text PDF on ERIC
Abidin, Aang Zainul; Istiyono, Edi; Fadilah, Nunung; Dwandaru, Wipsar Sunu Brams – International Journal of Evaluation and Research in Education, 2019
Classical assessments that are neither comprehensive nor sensitive to students' initial abilities yield measurement results far from students' actual abilities. This study was conducted to produce a computerized adaptive test for physics critical thinking skills (CAT-PhysCriTS) that met the feasibility criteria. The test was presented for the physics…
Descriptors: Foreign Countries, High School Students, Grade 11, Physics
Peer reviewed
Direct link
Senel, Selma; Kutlu, Ömer – European Journal of Special Needs Education, 2018
This paper examines the listening comprehension skills of visually impaired students (VIS) using computerised adaptive testing (CAT) and reader-assisted paper-pencil testing (raPPT), along with student views about them. An explanatory mixed-method design was used. The sample comprised 51 VIS in the 7th and 8th grades. Nine of these students were…
Descriptors: Computer Assisted Testing, Adaptive Testing, Visual Impairments, Student Attitudes
Peer reviewed
Direct link
Lin, Yin; Brown, Anna – Educational and Psychological Measurement, 2017
A fundamental assumption in computerized adaptive testing is that item parameters are invariant with respect to context--items surrounding the administered item. This assumption, however, may not hold in forced-choice (FC) assessments, where explicit comparisons are made between items included in the same block. We empirically examined the…
Descriptors: Personality Measures, Measurement Techniques, Context Effect, Test Items
Peer reviewed
Full text PDF on ERIC
Aybek, Eren Can; Demirtasli, R. Nukhet – International Journal of Research in Education and Science, 2017
This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. Besides that, it aims to introduce the simulation and live CAT software to the related researchers. Computerized adaptive test algorithm, assumptions of item response theory models, nominal response…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
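The CAT algorithm introduced in the entry above follows a standard loop: estimate the examinee's ability, then administer the unadministered item that is most informative at that estimate. A minimal sketch of the item-selection step under a two-parameter logistic (2PL) IRT model — the item bank values here are hypothetical, not from the article:

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Item information under the 2PL model: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def next_item(theta, a, b, administered):
    """Pick the unadministered item with maximum information at theta."""
    info = fisher_info(theta, a, b)
    info[list(administered)] = -np.inf   # exclude items already given
    return int(np.argmax(info))

# Hypothetical 5-item bank: discriminations a, difficulties b
a = np.array([1.0, 1.5, 0.8, 1.2, 2.0])
b = np.array([-1.0, 0.0, 0.5, 1.0, 2.0])
item = next_item(0.0, a, b, administered={1})  # item 1 already given
```

In a full CAT, this selection alternates with re-estimating theta (e.g., by maximum likelihood) after each response until a stopping rule is met.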
Peer reviewed
Full text PDF on ERIC
Albacete, Patricia; Silliman, Scott; Jordan, Pamela – Grantee Submission, 2017
Intelligent tutoring systems (ITS), like human tutors, try to adapt to a student's knowledge level so that instruction is tailored to the student's needs. One aspect of this adaptation relies on the ability to have an understanding of the student's initial knowledge so as to build on it, avoiding teaching what the student already knows and focusing on…
Descriptors: Intelligent Tutoring Systems, Knowledge Level, Multiple Choice Tests, Computer Assisted Testing
Peer reviewed
Direct link
Van Norman, Ethan R.; Ysseldyke, James E. – School Psychology Review, 2020
Within multitiered systems of support, assessment practices that limit the amount of time students miss instruction should be prioritized. At the same time, decisions about student response to intervention need to be based upon technically adequate data. We evaluated the impact of data collection frequency and trend estimation method on the…
Descriptors: Data Collection, Adaptive Testing, Computer Assisted Testing, Computation
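Trend estimation of the kind evaluated above is often just an ordinary least squares (OLS) slope fit to a student's scores over time. A minimal sketch with made-up progress-monitoring data (the article compares methods; this shows only the basic OLS variant):

```python
import numpy as np

def ols_slope(weeks, scores):
    """OLS trend line slope: estimated score gain per week."""
    x = np.asarray(weeks, dtype=float)
    y = np.asarray(scores, dtype=float)
    return np.polyfit(x, y, 1)[0]   # degree-1 fit; [0] is the slope

# Hypothetical weekly scores for one student
slope = ols_slope([1, 2, 3, 4], [10.0, 12.0, 13.0, 15.0])
```

With sparser data collection (fewer weeks), such slope estimates become noisier, which is the trade-off the study above examines.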
Peer reviewed
Direct link
Hsu, Chia-Ling; Wang, Wen-Chung – Journal of Educational Measurement, 2015
Cognitive diagnosis models provide profile information about a set of latent binary attributes, whereas item response models yield a summary report on a latent continuous trait. To utilize the advantages of both models, higher order cognitive diagnosis models were developed in which information about both latent binary attributes and latent…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Cognitive Measurement
Peer reviewed
Direct link
Sun, Bo; Zhu, Yunzong; Xiao, Yongkang; Xiao, Rong; Wei, Yungang – IEEE Transactions on Learning Technologies, 2019
In recent years, computerized adaptive testing (CAT) has gained popularity as an important means to evaluate students' ability. Assigning tags to test questions is crucial in CAT. Manual tagging is widely used for constructing question banks; however, this approach is time-consuming and might lead to consistency issues. Automatic question tagging,…
Descriptors: Computer Assisted Testing, Student Evaluation, Test Items, Multiple Choice Tests
Peer reviewed
Direct link
Soland, James – Applied Measurement in Education, 2018
This study estimated male-female and Black-White achievement gaps without accounting for low test motivation, then compared those estimates to ones that used several approaches to addressing rapid guessing. Researchers investigated two issues: (1) The differences in rates of rapid guessing across subgroups and (2) How much achievement gap…
Descriptors: Guessing (Tests), Achievement Gap, Student Motivation, Learner Engagement
Yasuda, Keiji; Kawashima, Hiroyuki; Hata, Yoko; Kimura, Hiroaki – International Association for Development of the Information Society, 2015
An adaptive learning system is proposed that incorporates a Bayesian network to efficiently gauge learners' understanding at the course-unit level. Also, learners receive content that is adapted to their measured level of understanding. The system works on an iPad via the Edmodo platform. A field experiment using the system in an elementary school…
Descriptors: Adaptive Testing, Bayesian Statistics, Networks, Computer Assisted Instruction
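The Bayesian-network approach above infers a learner's understanding from observed responses. The core operation is a Bayesian update of a mastery probability; a minimal single-node sketch (the slip and guess parameters are illustrative assumptions, not values from the article):

```python
def update_mastery(prior, correct, slip=0.1, guess=0.2):
    """One Bayesian update of P(mastered) after observing a response.

    slip  = P(wrong answer | mastered)     -- assumed value
    guess = P(correct answer | not mastered) -- assumed value
    """
    if correct:
        num = prior * (1.0 - slip)
        den = num + (1.0 - prior) * guess
    else:
        num = prior * slip
        den = num + (1.0 - prior) * (1.0 - guess)
    return num / den

p = 0.5                       # uninformative prior on mastery
p = update_mastery(p, True)   # a correct answer raises the estimate
```

A full Bayesian network chains such updates across related course units, letting evidence on one unit shift beliefs about its prerequisites.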
Peer reviewed
Direct link
Navarro, Juan-José; Mourgues-Codern, Catalina V. – Journal of Cognitive Education and Psychology, 2018
The development of novel educational assessment models founded on item response theory (IRT), as well as software tools designed to implement these models, has contributed to the surge in computerized adaptive tests (CATs). The distinguishing characteristic of CATs is that the sequence of items on a test progressively adapts to the performance…
Descriptors: Reading Processes, Computer Assisted Testing, Adaptive Testing, Item Response Theory
Peer reviewed
Full text PDF on ERIC
Robin, Frédéric; Bejar, Isaac; Liang, Longjuan; Rijmen, Frank – ETS Research Report Series, 2016
Exploratory and confirmatory factor analyses of domestic data from the "GRE"® revised General Test, introduced in 2011, were conducted separately for the verbal (VBL) and quantitative (QNT) reasoning measures to evaluate the unidimensionality and local independence assumptions required by item response theory (IRT). Results based on data…
Descriptors: College Entrance Examinations, Graduate Study, Verbal Tests, Mathematics Tests
Peer reviewed
Full text PDF on ERIC
Kim, Sooyeon; Moses, Tim – ETS Research Report Series, 2016
The purpose of this study is to evaluate the extent to which item response theory (IRT) proficiency estimation methods are robust to the presence of aberrant responses under the "GRE"® General Test multistage adaptive testing (MST) design. To that end, a wide range of atypical response behaviors affecting as much as 10% of the test items…
Descriptors: Item Response Theory, Computation, Robustness (Statistics), Response Style (Tests)
Peer reviewed
Direct link
Wise, Steven L.; Kingsbury, G. Gage – Journal of Educational Measurement, 2016
This study examined the utility of response time-based analyses in understanding the behavior of unmotivated test takers. For the data from an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent…
Descriptors: Achievement Tests, Student Motivation, Test Wiseness, Adaptive Testing
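Rapid-guessing analyses like the one above typically flag a response as a rapid guess when its response time falls below an item-specific threshold. A minimal sketch using a threshold of 10% of each item's median response time — the threshold rule and data here are illustrative assumptions, not the article's method:

```python
import numpy as np

def flag_rapid_guesses(times, threshold_frac=0.10):
    """Flag responses faster than a fraction of each item's median time.

    times: (examinees x items) response-time matrix in seconds.
    Returns a boolean matrix of the same shape.
    """
    thresholds = threshold_frac * np.median(times, axis=0)  # per item
    return times < thresholds

# Hypothetical response times: two suspiciously fast responses
times = np.array([[40.0, 55.0, 30.0],
                  [ 2.0, 50.0, 28.0],
                  [38.0,  3.0, 31.0]])
flags = flag_rapid_guesses(times)
```

Accuracy on flagged responses can then be compared to chance to check whether the examinee was genuinely guessing, as the study above does against several behavioral models.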
Peer reviewed
Direct link
Qian, Hong; Staniewska, Dorota; Reckase, Mark; Woo, Ada – Educational Measurement: Issues and Practice, 2016
This article addresses the issue of how to detect item preknowledge using item response time data in two computer-based large-scale licensure examinations. Item preknowledge is indicated by an unexpected short response time and a correct response. Two samples were used for detecting item preknowledge for each examination. The first sample was from…
Descriptors: Reaction Time, Licensing Examinations (Professions), Computer Assisted Testing, Prior Learning
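The preknowledge signal described above — an unexpectedly short response time paired with a correct response — can be sketched as a joint filter on a standardized log response time and the response outcome. The z-score cutoff and data below are illustrative assumptions, not the detection statistic used in the article:

```python
import numpy as np

def flag_preknowledge(log_times, correct, z_cut=-2.0):
    """Flag responses that are both correct and unusually fast.

    log_times: (examinees x items) log response times.
    correct:   boolean matrix of the same shape.
    A response is flagged when its item-wise z-score of log time
    falls below z_cut AND the response is correct.
    """
    mu = log_times.mean(axis=0)
    sd = log_times.std(axis=0)
    z = (log_times - mu) / sd
    return (z < z_cut) & correct

# Hypothetical single-item data: one examinee answers very fast
log_times = np.array([[3.0]] * 9 + [[0.5]])
correct = np.array([[True]] * 10)
flags = flag_preknowledge(log_times, correct)
```

Operational screening would use model-based expected response times per item and examinee rather than raw item means, but the correct-and-fast conjunction is the same.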