Showing 1 to 15 of 64 results
Peer reviewed
Cappaert, Kevin J.; Wen, Yao; Chang, Yu-Feng – Measurement: Interdisciplinary Research and Perspectives, 2018
Events such as curriculum changes or practice effects can lead to item parameter drift (IPD) in computer adaptive testing (CAT). The current investigation introduced a point- and weight-adjusted D² method for IPD detection for use in a CAT environment when items are suspected of drifting across test administrations. Type I error and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Identification
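A minimal sketch of the flavor of statistic involved (assuming 2PL items; the function name, quadrature grid, and weights are illustrative, not the authors' point- and weight-adjusted procedure): a drift statistic formed as a weighted sum of squared differences between the item characteristic curves implied by two calibrations of the same item.

    import numpy as np

    def p_2pl(theta, a, b):
        # 2PL item characteristic curve
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def d2_drift(a0, b0, a1, b1):
        # Weighted squared ICC difference between two calibrations of an item;
        # large values flag possible item parameter drift (illustrative weights).
        grid = np.linspace(-4.0, 4.0, 81)    # theta quadrature points
        w = np.exp(-grid**2 / 2.0)           # standard-normal ability weights
        w /= w.sum()
        diff = p_2pl(grid, a0, b0) - p_2pl(grid, a1, b1)
        return float(np.sum(w * diff**2))

    # An item whose difficulty drifted from 0.0 to 0.5 across administrations:
    print(d2_drift(a0=1.2, b0=0.0, a1=1.2, b1=0.5))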
Peer reviewed
Wang, Xi; Liu, Yang; Robin, Frederic; Guo, Hongwen – International Journal of Testing, 2019
In an on-demand testing program, some items are repeatedly used across test administrations. This poses a risk to test security. In this study, we considered a scenario wherein a test was divided into two subsets: one consisting of secure items and the other consisting of possibly compromised items. In a simulation study of multistage adaptive…
Descriptors: Identification, Methods, Test Items, Cheating
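One plausible reading of the secure-versus-compromised comparison (an assumption; the helper name and the simple z-statistic are not necessarily the authors' procedure): contrast an examinee's ability estimates on the two subsets, since preknowledge inflates performance on compromised items only.

    import math

    def theta_gap_z(theta_comp, se_comp, theta_sec, se_sec):
        # z-statistic for the gap between ability estimates from the possibly
        # compromised subset and the secure subset (hypothetical helper)
        return (theta_comp - theta_sec) / math.sqrt(se_comp**2 + se_sec**2)

    z = theta_gap_z(theta_comp=1.4, se_comp=0.35, theta_sec=0.2, se_sec=0.30)
    print(z, z > 1.645)   # flag at a one-sided 5% level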
Peer reviewed
Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2016
Meijer and van Krimpen-Stoop noted that the number of person-fit statistics (PFSs) that have been designed for computerized adaptive tests (CATs) is relatively modest. This article partially addresses that concern by suggesting three new PFSs for CATs. The statistics are based on tests for a change point and can be used to detect an abrupt change…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Goodness of Fit
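A minimal sketch of a change-point person-fit statistic in this spirit (assuming Rasch items and a fixed theta; names and the CUSUM form are illustrative): the maximum of a cumulative sum of standardized item-score residuals flags an abrupt mid-test change in performance.

    import numpy as np

    def max_cusum(responses, difficulties, theta):
        # Maximum absolute CUSUM of standardized Rasch residuals; a large
        # value suggests an abrupt change in performance during the test.
        b = np.asarray(difficulties, dtype=float)
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        resid = (np.asarray(responses) - p) / np.sqrt(p * (1 - p))
        csum = np.cumsum(resid) / np.sqrt(len(resid))
        return float(np.max(np.abs(csum)))

    # An examinee who aces the first half of the test and fails the second:
    print(max_cusum([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [0.0] * 10, theta=0.0))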
Peer reviewed
Wu, Huey-Min; Kuo, Bor-Chen; Wang, Su-Chen – Educational Technology & Society, 2017
In this study, a computerized dynamic assessment test with both immediate individualized feedback and adaptive properties was applied to mathematics learning in primary school. To evaluate the effectiveness of the computerized dynamic adaptive test, the performances of three types of remedial instruction were compared by a pre-test/post-test…
Descriptors: Adaptive Testing, Feedback (Response), Elementary School Mathematics, Foreign Countries
Fomenko, Julie Ann Schwein – ProQuest LLC, 2017
Twenty-first-century healthcare is a complex and demanding arena. Today's hospital environment is more complex than in previous years, and patients move through the system at a much faster pace. Newly graduated nurses are challenged in their first year by the healthcare needs of complex patients. Nurse educators and nurse leaders differ in…
Descriptors: Simulation, Nurses, Nursing Education, Competence
Peer reviewed
Cheng, Ying; Patton, Jeffrey M.; Shao, Can – Educational and Psychological Measurement, 2015
a-Stratified computerized adaptive testing with b-blocking (AST), as an alternative to the widely used maximum Fisher information (MFI) item selection method, can effectively balance item pool usage while providing accurate latent trait estimates in computerized adaptive testing (CAT). However, previous comparisons of these methods have treated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
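A minimal sketch of one a-stratified selection step with b-blocking (parameter draws and names are illustrative): items are sorted into blocks of similar difficulty, strata are formed by discrimination within each block, and early in the test the item whose difficulty best matches the provisional ability estimate is drawn from the low-a stratum.

    import numpy as np

    def ast_select(b, stratum_ids, current_stratum, administered, theta_hat):
        # One AST step: within the active stratum, pick the unused item
        # whose difficulty is closest to the provisional ability estimate.
        pool = [i for i in range(len(b))
                if stratum_ids[i] == current_stratum and i not in administered]
        return min(pool, key=lambda i: abs(b[i] - theta_hat))

    rng = np.random.default_rng(0)
    a, b = rng.lognormal(0, 0.3, 100), rng.normal(0, 1, 100)
    # b-blocking: sort items by difficulty, then stratify by discrimination
    # within each block of four similar-b items (stratum 0 = lowest a).
    strata = np.empty(100, dtype=int)
    for block in np.argsort(b).reshape(25, 4):
        strata[block[np.argsort(a[block])]] = [0, 1, 2, 3]
    print(ast_select(b, strata, current_stratum=0, administered=set(), theta_hat=0.0))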
Peer reviewed
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G. – Educational and Psychological Measurement, 2013
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Statistical Analysis
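A minimal sketch of the Sympson-Hetter step under the three-parameter logistic model (the exposure parameters K are assumed to be pre-calibrated, and the pool is illustrative): candidate items are considered in decreasing Fisher-information order, and each is administered only with probability equal to its exposure parameter.

    import numpy as np

    def info_3pl(theta, a, b, c):
        # Fisher information of a 3PL item at theta
        p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
        return a**2 * ((p - c) / (1 - c))**2 * (1 - p) / p

    def sh_select(theta_hat, a, b, c, K, administered, rng):
        # Consider items in decreasing information order; administer item i
        # only with probability K[i], its Sympson-Hetter exposure parameter.
        for i in np.argsort(-info_3pl(theta_hat, a, b, c)):
            if i not in administered and rng.random() < K[i]:
                return int(i)
        raise RuntimeError("item pool exhausted")

    rng = np.random.default_rng(1)
    a, b, c = rng.lognormal(0, 0.3, 50), rng.normal(0, 1, 50), np.full(50, 0.2)
    K = np.full(50, 0.8)    # pre-calibrated exposure parameters (assumed)
    print(sh_select(0.0, a, b, c, K, administered=set(), rng=rng))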
Peer reviewed
He, Lianzhen; Min, Shangchao – Language Assessment Quarterly, 2017
The first aim of this study was to develop a computer adaptive EFL test (CALT) that assesses test takers' listening and reading proficiency in English with dichotomous items and polytomous testlets. We reported in detail on the development of the CALT, including item banking, determination of suitable item response theory (IRT) models for item…
Descriptors: Computer Assisted Testing, Adaptive Testing, English (Second Language), Second Language Learning
Peer reviewed
Markon, Kristian E. – Psychological Methods, 2013
Although advances have improved our ability to describe the measurement precision of a test, it often remains challenging to summarize how well a test is performing overall. Reliability, for example, provides an overall summary of measurement precision, but it is sample-specific and might not reflect the potential usefulness of a test if the…
Descriptors: Measures (Individuals), Psychometrics, Statistical Analysis, Bayesian Statistics
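A small illustration of the sample-specificity point (a sketch under assumed 2PL items, not the article's method): marginal reliability computed from the same test information function drops when the ability distribution narrows, even though the test itself has not changed.

    import numpy as np

    def marginal_reliability(theta_sample, a, b):
        # var(theta) / (var(theta) + mean squared SEM) under the 2PL;
        # the same test yields different values for different samples.
        p = 1 / (1 + np.exp(-a * (theta_sample[:, None] - b)))
        sem2 = 1 / np.sum(a**2 * p * (1 - p), axis=1)
        v = np.var(theta_sample)
        return v / (v + sem2.mean())

    rng = np.random.default_rng(2)
    a, b = rng.lognormal(0, 0.3, 30), rng.normal(0, 1, 30)
    print(marginal_reliability(rng.normal(0, 1.0, 5000), a, b))  # wide sample
    print(marginal_reliability(rng.normal(0, 0.5, 5000), a, b))  # narrow: lower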
Peer reviewed
Foorman, Barbara R.; Petscher, Yaacov; Stanley, Christopher; Truckenmiller, Adrea – Journal of Research on Educational Effectiveness, 2017
The objective of this study was to determine the latent profiles of reading and language skills that characterized 7,752 students in kindergarten through tenth grade and to relate the profiles to norm-referenced reading outcomes. Reading and language skills were assessed with a computer-adaptive assessment administered in the middle of the year…
Descriptors: Reading Comprehension, Statistical Analysis, Computer Assisted Testing, Adaptive Testing
Peer reviewed
Kim, Sooyeon; Livingston, Samuel A. – ETS Research Report Series, 2017
The purpose of this simulation study was to assess the accuracy of a classical test theory (CTT)-based procedure for estimating the alternate-forms reliability of scores on a multistage test (MST) having 3 stages. We generated item difficulty and discrimination parameters for 10 parallel, nonoverlapping forms of the complete 3-stage test and…
Descriptors: Accuracy, Test Theory, Test Reliability, Adaptive Testing
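A minimal sketch of the classical test theory idea being assessed (assuming 2PL item behavior; a stripped-down illustration, not the report's 3-stage MST procedure): simulate the same examinees on two parallel, nonoverlapping forms and correlate their number-correct scores to estimate alternate-forms reliability.

    import numpy as np

    def number_correct(theta, a, b, rng):
        # Simulated number-correct scores on one form under the 2PL
        p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
        return (rng.random(p.shape) < p).sum(axis=1)

    rng = np.random.default_rng(3)
    theta = rng.normal(0, 1, 2000)
    # two parallel, nonoverlapping 40-item forms with matched parameter draws
    x1 = number_correct(theta, rng.lognormal(0, 0.3, 40), rng.normal(0, 1, 40), rng)
    x2 = number_correct(theta, rng.lognormal(0, 0.3, 40), rng.normal(0, 1, 40), rng)
    print(np.corrcoef(x1, x2)[0, 1])   # alternate-forms reliability estimate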
Peer reviewed
Belov, Dmitry I. – Journal of Educational Measurement, 2013
The development of statistical methods for detecting test collusion is a new research direction in the area of test security. Test collusion may be described as large-scale sharing of test materials, including answers to test items. Current methods of detecting test collusion are based on statistics also used in answer-copying detection…
Descriptors: Cheating, Computer Assisted Testing, Adaptive Testing, Statistical Analysis
Peer reviewed
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook – Journal of Educational Measurement, 2015
This inquiry is an investigation of item response theory (IRT) proficiency estimators' accuracy under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
Descriptors: Comparative Analysis, Item Response Theory, Computation, Accuracy
Peer reviewed
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G. – Applied Psychological Measurement, 2013
Item response theory parameters have to be estimated, and the estimation process introduces uncertainty into them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Descriptors: Test Construction, Test Items, Item Banks, Automation
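One common robust tactic in this vein (an assumption, not necessarily the authors' exact algorithm): replace each item's nominal Fisher information with a conservative lower quantile computed under the parameter uncertainty, so items whose high information may be an estimation artifact are down-weighted during assembly.

    import numpy as np

    def info_2pl(theta, a, b):
        # Fisher information of a 2PL item at theta
        p = 1 / (1 + np.exp(-a * (theta - b)))
        return a**2 * p * (1 - p)

    def robust_info(theta, a, b, se_a, alpha=0.05, n_draws=500, seed=4):
        # Lower alpha-quantile of item information under uncertainty in the
        # discrimination estimate: a conservative input for test assembly.
        draws = np.random.default_rng(seed).normal(a, se_a, n_draws)
        return float(np.quantile(info_2pl(theta, draws, b), alpha))

    # Penalized value falls below the nominal information of 0.5625:
    print(robust_info(theta=0.0, a=1.5, b=0.0, se_a=0.2))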
Peer reviewed
Ali, Usama S.; Chang, Hua-Hua – ETS Research Report Series, 2014
Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may offer similar advantages, and verifying this hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…
Descriptors: Adaptive Testing, Simulation, Pretests Posttests, Test Items
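The suitability index itself is not defined in the abstract; as a hypothetical stand-in for item-driven routing, one could assign each pretest item to examinees whose provisional ability is closest to the item's anticipated difficulty, where responses are most informative for calibration.

    def pick_pretest_item(theta_hat, anticipated_b, seen):
        # Route the pretest item whose anticipated difficulty best matches
        # the provisional ability estimate (a stand-in for the SI).
        pool = [i for i in range(len(anticipated_b)) if i not in seen]
        return min(pool, key=lambda i: abs(anticipated_b[i] - theta_hat))

    print(pick_pretest_item(theta_hat=0.8, anticipated_b=[-1.0, 0.0, 1.0], seen=set()))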