Showing 1 to 15 of 117 results
Jing Ma – ProQuest LLC, 2024
This study investigated the impact of scoring polytomous items later on measurement precision, classification accuracy, and test security in mixed-format adaptive testing. Utilizing the shadow-test approach, a simulation study was conducted across various test designs, test lengths, and numbers and locations of polytomous items. Results showed that while…
Descriptors: Scoring, Adaptive Testing, Test Items, Classification
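A deliberately simplified sketch of the shadow-test idea the entry above refers to (not the study's mixed-format design or its optimization machinery): at each step, assemble the most informative full-length test that satisfies the content constraints and contains every item already administered, then give the most informative remaining item from it. The 12-item Rasch pool, the single content constraint, and the grid-based ability update below are all assumptions made purely for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pool: Rasch difficulties plus a content label. The actual study
# mixes dichotomous and polytomous items; this sketch keeps everything dichotomous.
difficulties = rng.uniform(-2, 2, size=12)
content = np.array(list("AAAAAABBBBBB"))
TEST_LEN, NEED_B = 6, 2            # assemble 6-item tests containing exactly two "B" items
true_theta = 0.5                   # simulated examinee ability (assumed)

def item_info(theta, b):
    """Fisher information of a Rasch item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1.0 - p)

administered, responses, theta_hat = [], [], 0.0

for _ in range(TEST_LEN):
    # Shadow test: the most informative full-length test at the current ability
    # estimate that meets the constraint and contains every item already given.
    best_combo, best_total = None, -np.inf
    for combo in itertools.combinations(range(len(difficulties)), TEST_LEN):
        if not set(administered) <= set(combo):
            continue
        if np.sum(content[list(combo)] == "B") != NEED_B:
            continue
        total = item_info(theta_hat, difficulties[list(combo)]).sum()
        if total > best_total:
            best_combo, best_total = combo, total

    # Administer the most informative not-yet-used item from the shadow test.
    free = [i for i in best_combo if i not in administered]
    item = max(free, key=lambda i: item_info(theta_hat, difficulties[i]))
    administered.append(item)
    p_true = 1.0 / (1.0 + np.exp(-(true_theta - difficulties[item])))
    responses.append(rng.random() < p_true)

    # Crude grid-search ML update of the ability estimate (illustration only).
    grid = np.linspace(-4, 4, 161)
    loglik = np.zeros_like(grid)
    for i, u in zip(administered, responses):
        p = 1.0 / (1.0 + np.exp(-(grid - difficulties[i])))
        loglik += np.log(p) if u else np.log(1.0 - p)
    theta_hat = grid[int(np.argmax(loglik))]

print("items administered:", administered, "| final ability estimate:", round(theta_hat, 2))
```

Because every administered item is drawn from a shadow test that already satisfies the constraints, the completed test is guaranteed to satisfy them as well, which is the point of the approach.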
Peer reviewed
Yi-Jui I. Chen; Yi-Jhen Wu; Yi-Hsin Chen; Robin Irey – Journal of Psychoeducational Assessment, 2025
A short form of the 60-item computer-based orthographic processing assessment (long-form COPA or COPA-LF) was developed. The COPA-LF consists of five skills, including rapid perception, access, differentiation, correction, and arrangement. Thirty items from the COPA-LF were selected for the short-form COPA (COPA-SF) based on cognitive diagnostic…
Descriptors: Computer Assisted Testing, Test Length, Test Validity, Orthographic Symbols
Peer reviewed
Lin, Yin; Brown, Anna; Williams, Paul – Educational and Psychological Measurement, 2023
Several forced-choice (FC) computerized adaptive tests (CATs) have emerged in the field of organizational psychology, all of them employing ideal-point items. However, although most items developed historically follow dominance response models, research on FC CAT using dominance items is limited. Existing research is heavily dominated by…
Descriptors: Measurement Techniques, Computer Assisted Testing, Adaptive Testing, Industrial Psychology
Peer reviewed
Biccard, Piera; Mudau, Patience Kelebogile; van den Berg, Geesje – Journal of Learning for Development, 2023
This article explores student perceptions of writing online examinations for the first time during the COVID-19 pandemic. Prior to the pandemic, examinations at an open and distance learning institution in South Africa were conducted as venue-based examinations. From March 2020, all examinations were moved online. Online examinations were…
Descriptors: Foreign Countries, Student Attitudes, Computer Assisted Testing, COVID-19
Peer reviewed
Cheng, Ying; Shao, Can – Educational and Psychological Measurement, 2022
Computer-based and web-based testing have become increasingly popular in recent years. Their popularity has dramatically expanded the availability of response time data. Compared to the conventional item response data that are often dichotomous or polytomous, response time has the advantage of being continuous and can be collected in an…
Descriptors: Reaction Time, Test Wiseness, Computer Assisted Testing, Simulation
Peer reviewed
Erdem-Kara, Basak; Dogan, Nuri – International Journal of Assessment Tools in Education, 2022
Recently, adaptive test approaches have become a viable alternative to traditional fixed-item tests. The main advantage of adaptive tests is that they reach the desired measurement precision with fewer items. However, fewer items mean that each item has a greater effect on ability estimation, and therefore those tests are open to more…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Test Construction
Peer reviewed
Ying Xu; Xiaodong Li; Jin Chen – Language Testing, 2025
This article provides a detailed review of the Computer-based English Listening Speaking Test (CELST) used in Guangdong, China, as part of the National Matriculation English Test (NMET) to assess students' English proficiency. The CELST measures listening and speaking skills as outlined in the "English Curriculum for Senior Middle…
Descriptors: Computer Assisted Testing, English (Second Language), Language Tests, Listening Comprehension Tests
Peer reviewed
Nikola Ebenbeck; Markus Gebhardt – Journal of Special Education Technology, 2024
Technologies that enable individualization for students have significant potential in special education. Computerized Adaptive Testing (CAT) refers to digital assessments that automatically adjust their difficulty level based on students' abilities, allowing for personalized, efficient, and accurate measurement. This article examines whether CAT…
Descriptors: Computer Assisted Testing, Students with Disabilities, Special Education, Grade 3
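To make the CAT mechanism described in the entry above concrete, here is a minimal hypothetical sketch (not the instrument studied in the article): a Rasch item bank, maximum-information item selection, a Bayesian ability update, and a stop rule that ends the test once the estimate is precise enough, so stronger and weaker students can finish after different numbers of items. The bank, the precision target, and the ability values are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 60-item Rasch bank spanning the ability scale.
difficulties = np.linspace(-3, 3, 60)
grid = np.linspace(-4, 4, 161)
prior = np.exp(-0.5 * grid**2)
prior /= prior.sum()                                   # standard-normal prior

def rasch_p(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def run_cat(true_theta, se_target=0.4, max_items=40):
    """Adaptive loop: pick the most informative item, update the posterior,
    stop once the posterior SD falls below se_target."""
    post, used = prior.copy(), []
    while len(used) < max_items:
        eap = np.sum(grid * post)                      # EAP ability estimate
        sd = np.sqrt(np.sum((grid - eap) ** 2 * post))
        if used and sd < se_target:
            break
        info = rasch_p(eap, difficulties) * (1 - rasch_p(eap, difficulties))
        info[used] = -np.inf                           # never reuse an item
        item = int(np.argmax(info))                    # maximum-information selection
        used.append(item)
        correct = rng.random() < rasch_p(true_theta, difficulties[item])
        like = rasch_p(grid, difficulties[item])
        post *= like if correct else (1 - like)
        post /= post.sum()                             # Bayesian ability update
    return len(used), np.sum(grid * post)

for theta in (-1.5, 0.0, 1.5):
    n, est = run_cat(theta)
    print(f"true ability {theta:+.1f}: stopped after {n} items, estimate {est:+.2f}")
```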
Peer reviewed
Timothy S. Faith – Teaching and Learning Excellence through Scholarship, 2024
This study compared traditional methods of college-level instruction, including lecture and class discussion followed by assessment via course content exams, with a variety of other instructional techniques. The intent was to evaluate whether more contemporary instructional techniques are significantly correlated with improved average exam scores…
Descriptors: Community College Students, Business Administration Education, Teaching Methods, Alternative Assessment
Peer reviewed
Wyse, Adam E.; McBride, James R. – Measurement: Interdisciplinary Research and Perspectives, 2022
A common practical challenge is how to assign ability estimates to all-incorrect and all-correct response patterns when using item response theory (IRT) models and maximum likelihood estimation (MLE), since ability estimates for these types of responses equal −∞ or +∞. This article uses a simulation study and data from an operational K-12…
Descriptors: Scores, Adaptive Testing, Computer Assisted Testing, Test Length
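The infinite-estimate problem this entry refers to follows directly from the likelihood: under a Rasch (or any monotone IRT) model, the log-likelihood of an all-correct pattern increases without bound as ability grows, so maximum likelihood has no finite solution; the mirror image holds for all-incorrect patterns. A tiny numerical check, using an assumed three-item bank:

```python
import numpy as np

difficulties = np.array([-1.0, 0.0, 1.0])   # hypothetical 3-item Rasch bank

def loglik_all_correct(theta):
    """Log-likelihood of answering every item correctly under the Rasch model."""
    p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))
    return np.sum(np.log(p))

# The log-likelihood keeps growing as theta increases, approaching 0 from below,
# so there is no interior maximum and the MLE diverges to +infinity.
for theta in (0.0, 2.0, 5.0, 10.0):
    print(theta, round(loglik_all_correct(theta), 4))
```

Operational programs therefore substitute some finite value or switch to a Bayesian estimator for these patterns, which is the kind of assignment choice the abstract refers to.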
Peer reviewed
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents, and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of solution p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
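A back-of-envelope version of why medium difficulty is the reference case (this is an illustrative approximation, not the closed formula derived in the article): under a Rasch model, an item answered correctly with probability p contributes Fisher information p(1−p), which peaks at 1/4 when p = 1/2, so n such items give an ability standard error of roughly 2/√n.

```python
import numpy as np

# Illustrative approximation only: each item kept at p = 1/2 contributes
# Fisher information p*(1-p) = 0.25, so SE(theta) ~ 1/sqrt(0.25*n) = 2/sqrt(n).
for n in (10, 25, 50, 100):
    se = 2.0 / np.sqrt(n)
    print(f"n = {n:3d} items at p = 1/2  ->  SE(theta) ~ {se:.3f}")
```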
Peer reviewed
Sari, Halil Ibrahim – International Journal of Psychology and Educational Studies, 2020
Due to their low cost, Monte Carlo (MC) simulations have been conducted extensively in the area of educational measurement. However, the results derived from MC studies may not always be generalizable to operational studies. The purpose of this study was to provide a methodological discussion of other types of simulation methods, and run…
Descriptors: Computer Assisted Testing, Adaptive Testing, Simulation, Test Length
Peer reviewed
Yasuda, Jun-ichiro; Hull, Michael M.; Mae, Naohiro – Physical Review Physics Education Research, 2022
This paper presents improvements made to a computerized adaptive testing (CAT)-based version of the FCI (FCI-CAT) with regard to test security and test efficiency. First, we will discuss measures to enhance test security by controlling for item overexposure, decreasing the risk that respondents may (i) memorize the content of a pretest for use on…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Risk Management
Benton, Tom – Research Matters, 2021
Computer adaptive testing is intended to make assessment more reliable by tailoring the difficulty of the questions a student has to answer to their level of ability. Most commonly, this benefit is used to justify the length of tests being shortened whilst retaining the reliability of a longer, non-adaptive test. Improvements due to adaptive…
Descriptors: Risk, Item Response Theory, Computer Assisted Testing, Difficulty Level
Peer reviewed
Hula, William D.; Fergadiotis, Gerasimos; Swiderski, Alexander M.; Silkes, JoAnn P.; Kellough, Stacey – Journal of Speech, Language, and Hearing Research, 2020
Purpose: The purpose of this study was to verify the equivalence of 2 alternate test forms with nonoverlapping content generated by an item response theory (IRT)--based computer-adaptive test (CAT). The Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) was utilized as an item bank in a prospective, independent…
Descriptors: Adaptive Testing, Computer Assisted Testing, Severity (of Disability), Aphasia