Showing 1 to 15 of 19 results
Peer reviewed
Achmad Rante Suparman; Eli Rohaeti; Sri Wening – Journal on Efficiency and Responsibility in Education and Science, 2024
This study focuses on developing a computer-based five-tier chemical diagnostic test with 11 assessment categories scored from 0 to 10. The 20 items produced were validated by education, material, measurement, and media experts, and an average Aiken index > 0.70 was…
Descriptors: Chemistry, Diagnostic Tests, Computer Assisted Testing, Credits
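For reference, a minimal sketch of how an Aiken validity index of the kind reported above can be computed, assuming expert ratings on the 0-10 scale the abstract describes (11 categories); the ratings are hypothetical, not the study's data:

```python
# Aiken's V content-validity index for one item, assuming an 11-category
# rating scale (scores 0-10) as described in the abstract.

def aiken_v(ratings, lowest=0, categories=11):
    """V = sum(r_i - lowest) / (n * (categories - 1)); ranges from 0 to 1."""
    n = len(ratings)
    s = sum(r - lowest for r in ratings)
    return s / (n * (categories - 1))

# Four hypothetical experts rate one item 8, 9, 7, 10 -> V = 0.85,
# above the 0.70 cutoff the study uses.
print(aiken_v([8, 9, 7, 10]))  # 0.85
```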
Peer reviewed
Stefanie A. Wind; Beyza Aksu-Dunya – Applied Measurement in Education, 2024
Careless responding is a pervasive concern in research using affective surveys. Although researchers have considered various methods for identifying careless responses, few studies have examined the utility of these methods in the context of computerized adaptive testing (CAT) for affective scales. Using a simulation study informed by recent…
Descriptors: Response Style (Tests), Computer Assisted Testing, Adaptive Testing, Affective Measures
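To make the kind of method at issue concrete, here is a hedged sketch of one common careless-responding indicator, the longstring index (the longest run of identical consecutive responses); it is a generic illustration, not necessarily one of the methods the article evaluates:

```python
# Longstring index: a long run of identical answers on an affective scale
# is often taken as a flag for careless responding.

def longstring(responses):
    """Return the length of the longest run of identical responses."""
    if not responses:
        return 0
    longest = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

print(longstring([3, 3, 3, 3, 2, 4, 4]))  # 4 -> flag if above some cutoff
```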
Peer reviewed
Carol Eckerly; Yue Jia; Paul Jewsbury – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring
Peer reviewed
Betts, Joe; Muntean, William; Kim, Doyoung; Kao, Shu-chuan – Educational and Psychological Measurement, 2022
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit. However, there are several possible methods for computing raw…
Descriptors: Scoring, Test Items, Test Format, Raw Scores
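As an illustration of the scoring choices the abstract raises, a hedged sketch of two generic raw-score rules for a multiple-response item; the rule names and details are illustrative, not necessarily the methods the article compares:

```python
# Two generic raw-score rules for a multiple-response item.

def score_all_or_nothing(selected, key):
    """1 point only if the selected set exactly matches the key."""
    return int(set(selected) == set(key))

def score_partial(selected, key, n_options):
    """+1 for each option classified correctly (selected-and-keyed or
    unselected-and-unkeyed): a per-option partial-credit rule."""
    selected, key = set(selected), set(key)
    return sum((opt in selected) == (opt in key) for opt in range(n_options))

key = {0, 2, 3}                           # keyed options among 5 choices
print(score_all_or_nothing({0, 2}, key))  # 0
print(score_partial({0, 2}, key, 5))      # 4 of 5 options classified right
```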
Peer reviewed
Nordenswan, Elisabeth; Kataja, Eeva-Leena; Deater-Deckard, Kirby; Korja, Riikka; Karrasch, Mira; Laine, Matti; Karlsson, Linnea; Karlsson, Hasse – SAGE Open, 2020
This study tested whether executive functioning (EF)/learning tasks from the CogState computerized test battery show a unitary latent structure. This information is important for the construction of composite measures on these tasks for applied research purposes. Based on earlier factor analytic research, we identified five CogState tasks that…
Descriptors: Executive Function, Cognitive Tests, Test Items, Computer Assisted Testing
Peer reviewed
Yi, Yeon-Sook – Language Testing, 2017
The present study examines the relative importance of attributes within and across items by applying four cognitive diagnostic assessment models. It uses the models' capacity to indicate inter-attribute relationships, reflecting examinees' response behaviors, to analyze scored test-taker responses to four forms…
Descriptors: Second Language Learning, Reading Comprehension, Listening Comprehension, Language Tests
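For context, a hedged sketch of one widely used cognitive diagnostic model, DINA (deterministic inputs, noisy "and" gate); the article applies four models, which may or may not include this one, and the parameter values are illustrative:

```python
# DINA model: an examinee answers correctly with probability (1 - slip)
# if they master every attribute the item requires (per its Q-matrix row),
# and with probability guess otherwise.

def dina_prob(alpha, q, slip, guess):
    """P(correct | mastery profile alpha, Q-matrix row q)."""
    eta = all(a == 1 for a, needed in zip(alpha, q) if needed == 1)
    return (1 - slip) if eta else guess

alpha = [1, 0, 1]  # examinee masters attributes 1 and 3
q = [1, 0, 1]      # item requires attributes 1 and 3
print(dina_prob(alpha, q, slip=0.1, guess=0.2))  # 0.9
```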
Peer reviewed
Thissen, David – Journal of Educational and Behavioral Statistics, 2016
David Thissen, a professor in the Department of Psychology and Neuroscience, Quantitative Program at the University of North Carolina, has consulted and served on technical advisory committees for assessment programs that use item response theory (IRT) over the past couple of decades. He has come to the conclusion that there are usually two purposes…
Descriptors: Item Response Theory, Test Construction, Testing Problems, Student Evaluation
Peer reviewed
Wang, Chun; Fan, Zhewen; Chang, Hua-Hua; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2013
The item response times (RTs) collected from computerized testing represent an underutilized type of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. Current approaches to RTs mainly focus on parametric models, which have the…
Descriptors: Reaction Time, Computer Assisted Testing, Test Items, Accuracy
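A hedged sketch of the kind of parametric model the abstract alludes to, assuming van der Linden's lognormal response-time model; the parameter values below are illustrative:

```python
# Lognormal response-time model: the log response time of person i on
# item j is normal with mean beta_j - tau_i (item time intensity minus
# person speed) and standard deviation 1/alpha_j (inverse discrimination).
import math
import random

def sample_rt(tau, beta, alpha, rng=random):
    """Draw one response time: ln T ~ N(beta - tau, (1/alpha)^2)."""
    return math.exp(rng.gauss(beta - tau, 1.0 / alpha))

random.seed(0)
# A time-intensive item (beta=4.0) for a fast examinee (tau=0.5).
print(sample_rt(tau=0.5, beta=4.0, alpha=2.0))  # seconds
```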
Peer reviewed
Johnson, Philip; Tymms, Peter – Journal of Research in Science Teaching, 2011
Previously, a small scale, interview-based, 3-year longitudinal study (ages 11-14) in one school had suggested a learning progression related to the concept of a substance. This article presents the results of a large-scale, cross-sectional study which used Rasch modeling to test the hypothesis of the learning progression. Data were collected from…
Descriptors: Computer Assisted Testing, Chemistry, Measures (Individuals), Foreign Countries
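For reference, a minimal sketch of the dichotomous Rasch model underlying the study's analysis; the parameter values are illustrative, not the study's estimates:

```python
# Dichotomous Rasch model: the probability of a correct response depends
# only on the difference between person ability (theta) and item
# difficulty (b).
import math

def rasch_prob(theta, b):
    """P(X=1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_prob(theta=1.0, b=0.0))  # ~0.73
```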
Nering, Michael L., Ed.; Ostini, Remo, Ed. – Routledge, Taylor & Francis Group, 2010
This comprehensive "Handbook" focuses on the most used polytomous item response theory (IRT) models. These models help us understand the interaction between examinees and test questions where the questions have various response categories. The book reviews all of the major models and includes discussions about how and where the models…
Descriptors: Guides, Item Response Theory, Test Items, Correlation
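To give a flavor of the models the Handbook covers, a hedged sketch of one of them, the partial credit model's category probabilities; the step values are illustrative:

```python
# Partial credit model: P(X=k) is a normalized exponential of the
# cumulative sum of (theta - step difficulty) terms up to category k.
import math

def pcm_probs(theta, steps):
    """Return P(X=k) for k = 0..len(steps) under the partial credit model."""
    cumsums = [0.0]
    for delta in steps:
        cumsums.append(cumsums[-1] + (theta - delta))
    exps = [math.exp(c) for c in cumsums]
    total = sum(exps)
    return [e / total for e in exps]

# Three score categories (two step difficulties), ability theta = 0.5.
print(pcm_probs(0.5, steps=[-0.5, 1.0]))  # probabilities summing to 1
```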
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – 2000
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. For computerized adaptive tests (CATs) with dichotomous items, several person-fit statistics for detecting nonfitting item score patterns have been proposed. Both for paper-and-pencil (P&P) tests and CATs, detection of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Response Theory
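A hedged sketch of one standard person-fit statistic of the kind discussed, the standardized log-likelihood l_z for dichotomous items; it is one of several statistics the literature proposes, and the probabilities below are illustrative:

```python
# l_z person-fit statistic: standardize the response-pattern
# log-likelihood l0 by its model-implied mean and variance; large negative
# values suggest a nonfitting (aberrant) response pattern.
import math

def lz_statistic(responses, probs):
    """l_z = (l0 - E[l0]) / sqrt(Var[l0]) for 0/1 responses and P(x=1)."""
    l0 = exp = var = 0.0
    for x, p in zip(responses, probs):
        q = 1.0 - p
        l0 += x * math.log(p) + (1 - x) * math.log(q)
        exp += p * math.log(p) + q * math.log(q)
        var += p * q * math.log(p / q) ** 2
    return (l0 - exp) / math.sqrt(var)

# Aberrant pattern: missing easy items (high P) while passing hard ones.
print(lz_statistic([0, 0, 1, 1], [0.9, 0.8, 0.3, 0.2]))  # strongly negative
```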
Peer reviewed
Reise, Steven P. – Applied Psychological Measurement, 2001
This book contains a series of research articles about computerized adaptive testing (CAT) written for advanced psychometricians. The book is divided into sections on: (1) item selection and examinee scoring in CAT; (2) examples of CAT applications; (3) item banks; (4) determining model fit; and (5) using testlets in CAT. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Banks
Peer reviewed
Luecht, Richard M.; Hirsch, Thomas M. – Applied Psychological Measurement, 1992
Derivations of several item selection algorithms for use in fitting test items to target information functions (IFs) are described. These algorithms, which use an average growth approximation of target IFs, were tested by generating six test forms and were found to provide reliable fit. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Equations (Mathematics), Goodness of Fit
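To illustrate the general task, a generic greedy sketch of fitting a test form to a target information function under a 2PL model; this shows the idea only and is not the authors' average-growth-approximation algorithm:

```python
# Greedy assembly against a target information function: at each step pick
# the unused item that most reduces the remaining gap to the target,
# evaluated at a small set of theta points.
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def greedy_assemble(items, target, thetas, n_items):
    chosen, current = [], [0.0] * len(thetas)
    pool = list(items)
    for _ in range(n_items):
        def gap_after(item):
            return sum(max(t - (c + info_2pl(th, *item)), 0.0)
                       for th, t, c in zip(thetas, target, current))
        best = min(pool, key=gap_after)   # item that best fills the gap
        pool.remove(best)
        chosen.append(best)
        current = [c + info_2pl(th, *best) for th, c in zip(thetas, current)]
    return chosen

items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.0), (0.9, -0.5)]
print(greedy_assemble(items, target=[1.0, 1.0, 1.0],
                      thetas=[-1.0, 0.0, 1.0], n_items=3))
```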
Glas, C. A. W. – 2001
A previous study by C. Glas (1998) examined how to evaluate whether adaptive testing data used for online calibration sufficiently fit the item response model. Three approaches were suggested, based on a Lagrange multiplier (LM) statistic, a Wald statistic, and a cumulative sum (CUSUM) statistic, respectively. For all these methods,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Estimation (Mathematics)
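A hedged sketch of the cumulative-sum idea behind the third approach: accumulate standardized residuals between observed responses and model-predicted probabilities, and signal misfit when a running sum drifts past a threshold. The slack and threshold values are generic chart choices, not Glas's exact statistic:

```python
# Two-sided CUSUM chart over a stream of standardized residuals.

def cusum_flags(residuals, slack=0.5, threshold=3.0):
    """S+ tracks upward drift, S- downward drift; flag when either trips."""
    s_hi = s_lo = 0.0
    flags = []
    for r in residuals:
        s_hi = max(0.0, s_hi + r - slack)
        s_lo = min(0.0, s_lo + r + slack)
        flags.append(s_hi > threshold or s_lo < -threshold)
    return flags

# Residuals drifting upward eventually trip the chart on the last value.
print(cusum_flags([0.2, 1.1, 1.3, 0.9, 1.4, 1.2]))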
De Ayala, R. J.; And Others – 1991
This study investigated the robustness of ability estimation in a partial credit (PC) model-based computerized adaptive test (CAT) to items that did not fit the PC model. A CAT program was written based on the PC model; it used maximum likelihood estimation of ability, and item selection was based on information. The simulation terminated…
Descriptors: Adaptive Testing, Computer Assisted Testing, Equations (Mathematics), Error of Measurement
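A minimal self-contained sketch of the kind of ability estimation the simulated CAT used: maximum likelihood estimation of theta under the partial credit model, here via a simple grid search (the PCM is restated from the earlier sketch so this block stands alone; item steps are illustrative):

```python
# Grid-search MLE of theta under the partial credit model.
import math

def pcm_probs(theta, steps):
    """P(X=k) for k = 0..len(steps) under the partial credit model."""
    cums, c = [0.0], 0.0
    for d in steps:
        c += theta - d
        cums.append(c)
    exps = [math.exp(v) for v in cums]
    s = sum(exps)
    return [e / s for e in exps]

def mle_theta(responses, item_steps, grid=None):
    """Return the grid point maximizing the PCM log-likelihood."""
    grid = grid or [g / 10.0 for g in range(-40, 41)]  # -4.0 .. 4.0
    def loglik(t):
        return sum(math.log(pcm_probs(t, steps)[x])
                   for x, steps in zip(responses, item_steps))
    return max(grid, key=loglik)

item_steps = [[-0.5, 0.5], [0.0, 1.0], [-1.0, 0.0]]
print(mle_theta([2, 1, 2], item_steps))  # theta estimate for high scores
```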