Publication Date
In 2025: 1
Since 2024: 5
Since 2021 (last 5 years): 6
Since 2016 (last 10 years): 6
Since 2006 (last 20 years): 6
Descriptor
Algorithms: 17
Test Construction: 7
Test Items: 7
Achievement Tests: 6
Scoring: 6
Simulation: 5
Computer Assisted Testing: 4
Models: 4
Multidimensional Scaling: 4
Evaluation Methods: 3
Item Response Theory: 3
Source
Journal of Educational…: 17
Author
Birenbaum, Menucha: 2
Clauser, Brian E.: 2
Tatsuoka, Kikumi K.: 2
Adema, Jos J.: 1
Carl F. Falk: 1
Clyman, Stephen G.: 1
Emre Gonulates: 1
Fatsuoka, Kikumi K.: 1
Frank Miller: 1
Gongjun Xu: 1
Green, Bert F.: 1
Publication Type
Journal Articles: 17
Reports - Research: 11
Reports - Evaluative: 4
Reports - Descriptive: 2
Speeches/Meeting Papers: 1
Education Level
Higher Education: 1
Postsecondary Education: 1
Secondary Education: 1
Audience
Researchers: 1
Assessments and Surveys
Program for International…: 1
Mahmood Ul Hassan; Frank Miller – Journal of Educational Measurement, 2024
Multidimensional achievement tests have recently been gaining importance in educational and psychological measurement. For example, multidimensional diagnostic tests can help students determine which particular domain of knowledge they need to improve for better performance. To estimate the characteristics of candidate items (calibration) for…
Descriptors: Multidimensional Scaling, Achievement Tests, Test Items, Test Construction
Wenchao Ma; Miguel A. Sorrel; Xiaoming Zhai; Yuan Ge – Journal of Educational Measurement, 2024
Most existing diagnostic models are developed to detect whether students have mastered a set of skills of interest, but few have focused on identifying what scientific misconceptions students possess. This article developed a general dual-purpose model for simultaneously estimating students' overall ability and the presence and absence of…
Descriptors: Models, Misconceptions, Diagnostic Tests, Ability
Harold Doran; Tetsuhiro Yamada; Ted Diaz; Emre Gonulates; Vanessa Culver – Journal of Educational Measurement, 2025
Computer adaptive testing (CAT) is an increasingly common mode of test administration offering improved test security, better measurement precision, and the potential for shorter testing experiences. This article presents a new item selection algorithm based on a generalized objective function to support multiple types of testing conditions and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
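The maximum-information selection rule that most CAT item-selection algorithms build on can be sketched in a few lines. This is a minimal illustration under a 2PL model, not the generalized objective function the article proposes; the item bank, parameters, and three-item test length below are hypothetical.

```python
import math

# Hypothetical 2PL item bank: (discrimination a, difficulty b) per item.
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.2), (0.9, -0.5)]

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta, *bank[i]))

theta = 0.0            # provisional ability estimate
administered = set()
for _ in range(3):     # administer a short three-item test
    item = select_item(theta, administered)
    administered.add(item)
    # ...score the response and re-estimate theta here (e.g., MLE or EAP)...
```

In a real administration the provisional `theta` would be updated after every response; the loop above leaves that step as a comment, so selection here always evaluates information at the starting ability.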
Yamaguchi, Kazuhiro; Zhang, Jihong – Journal of Educational Measurement, 2023
This study proposed Gibbs sampling algorithms for variable selection in a latent regression model under a unidimensional two-parameter logistic item response theory model. Three types of shrinkage priors were employed to obtain shrinkage estimates: double-exponential (i.e., Laplace), horseshoe, and horseshoe+ priors. These shrinkage priors were…
Descriptors: Algorithms, Simulation, Mathematics Achievement, Bayesian Statistics
Jia Liu; Xiangbin Meng; Gongjun Xu; Wei Gao; Ningzhong Shi – Journal of Educational Measurement, 2024
In this paper, we develop a mixed stochastic approximation expectation-maximization (MSAEM) algorithm coupled with a Gibbs sampler to compute the marginalized maximum a posteriori estimate (MMAPE) of a confirmatory multidimensional four-parameter normal ogive (M4PNO) model. The proposed MSAEM algorithm not only has the computational advantages of…
Descriptors: Algorithms, Achievement Tests, Foreign Countries, International Assessment
Sijia Huang; Seungwon Chung; Carl F. Falk – Journal of Educational Measurement, 2024
In this study, we introduced a cross-classified multidimensional nominal response model (CC-MNRM) to account for various response styles (RS) in the presence of cross-classified data. The proposed model allows slopes to vary across items and can explore impacts of observed covariates on latent constructs. We applied a recently developed variant of…
Descriptors: Response Style (Tests), Classification, Data, Models

van der Linden, Wim J.; Adema, Jos J. – Journal of Educational Measurement, 1998
Proposes an algorithm for the assembly of multiple test forms in which the multiple-form problem is reduced to a series of computationally less intensive two-form problems. Illustrates how the method can be implemented using 0-1 linear programming and gives two examples. (SLD)
Descriptors: Algorithms, Linear Programming, Test Construction, Test Format
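The reduction to two-form problems rests on a 0-1 decision variable per item: x_i = 1 assigns item i to form A, x_i = 0 to form B. A toy version of one such two-form step, with hypothetical integer difficulty points and a brute-force search standing in for a 0-1 linear-programming solver, looks like this:

```python
from itertools import combinations

# Hypothetical item difficulties (scaled to integer points).
difficulties = [20, 35, 50, 55, 70, 90]
form_size = 3

# 0-1 assignment: the chosen indices go to form A, the rest to form B.
# Objective: minimize the gap in total difficulty between the two forms,
# a single-statistic stand-in for the constraint sets a 0-1 LP would encode.
best = None
for picks in combinations(range(len(difficulties)), form_size):
    sum_a = sum(difficulties[i] for i in picks)
    sum_b = sum(difficulties) - sum_a
    gap = abs(sum_a - sum_b)
    if best is None or gap < best[0]:
        best = (gap, picks)

gap, form_a = best
form_b = tuple(i for i in range(len(difficulties)) if i not in form_a)
```

In practice the objective and constraints would cover content coverage, information targets, and the like, and an ILP solver would replace the enumeration, which grows combinatorially with pool size.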

Millman, Jason; Westman, Ronald S. – Journal of Educational Measurement, 1989
Five approaches to writing test items with computer assistance are described. A model of knowledge using a set of structures and a system for implementing the scheme are outlined. The approaches include the author-supplied approach, replacement-set procedures, computer-supplied prototype items, subject-matter mapping, and discourse analysis. (TJH)
Descriptors: Achievement Tests, Algorithms, Computer Assisted Testing, Test Construction

Van Der Flier, Henk; And Others – Journal of Educational Measurement, 1984
Two strategies for assessing item bias are discussed: methods comparing item difficulties unconditional on ability and methods comparing probabilities of response conditional on ability. Results suggest that the iterative logit method is an improvement on the noniterative one and is efficient in detecting biased and unbiased items. (Author/DWH)
Descriptors: Algorithms, Evaluation Methods, Item Analysis, Scores
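The distinction between the two strategies can be sketched with a toy computation: an unconditional method compares each group's overall proportion correct on the studied item, while a conditional method compares groups within strata matched on ability (here proxied by the score on the remaining items). The response records below are hypothetical.

```python
from collections import defaultdict

# Hypothetical records: (group, score on remaining items, studied item correct?)
records = [
    ("ref", 0, 0), ("ref", 1, 1), ("ref", 1, 0), ("ref", 2, 1),
    ("foc", 0, 0), ("foc", 1, 0), ("foc", 1, 1), ("foc", 2, 1),
]

def p_overall(group):
    """Unconditional: overall proportion correct for one group."""
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def p_by_score(group):
    """Conditional: proportion correct per rest-score stratum for one group."""
    tally = defaultdict(lambda: [0, 0])  # score -> [n correct, n total]
    for g, score, correct in records:
        if g == group:
            tally[score][0] += correct
            tally[score][1] += 1
    return {s: c / n for s, (c, n) in tally.items()}
```

With these (deliberately bias-free) records, the groups match both overall and within every stratum; a biased item would show group differences in the conditional comparison even when ability distributions differ.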

Schnipke, Deborah L.; Green, Bert F. – Journal of Educational Measurement, 1995
Two item selection algorithms, one based on maximal differentiation between examinees and one based on item response theory and maximum information for each examinee, were compared in simulated linear and adaptive tests of cognitive ability. Adaptive tests based on maximum information were clearly superior. (SLD)
Descriptors: Adaptive Testing, Algorithms, Comparative Analysis, Item Response Theory

Tatsuoka, Kikumi K.; Tatsuoka, Maurice M. – Journal of Educational Measurement, 1983
This study introduces the individual consistency index (ICI), which measures the extent to which patterns of responses to parallel sets of items remain consistent over time. ICI is used as an error diagnostic tool to detect aberrant response patterns resulting from the consistent application of erroneous rules of operation. (Author/PN)
Descriptors: Achievement Tests, Algorithms, Error Patterns, Measurement Techniques

Clauser, Brian E.; Margolis, Melissa J.; Clyman, Stephen G.; Ross, Linette P. – Journal of Educational Measurement, 1997
Research on automated scoring is extended by comparing alternative automated systems for scoring a computer simulation of physicians' patient management skills. A regression-based system is more highly correlated with experts' evaluations than a system that uses complex rules to map performances into score levels, but both approaches are feasible.…
Descriptors: Algorithms, Automation, Comparative Analysis, Computer Assisted Testing

Birenbaum, Menucha; Tatsuoka, Kikumi K. – Journal of Educational Measurement, 1983
The outcomes of two scoring methods (one based on an error analysis and the second on a conventional method) on free-response tests, compared in terms of reliability and dimensionality, indicate that the conventional method is inferior in both aspects. (Author/PN)
Descriptors: Achievement Tests, Algorithms, Data, Junior High Schools

Tatsuoka, Kikumi K. – Journal of Educational Measurement, 1983
A newly introduced approach, rule space, can represent large numbers of erroneous rules of arithmetic operations quantitatively and can predict the likelihood of each erroneous rule. The new model challenges the credibility of the traditional right-or-wrong scoring procedure. (Author/PN)
Descriptors: Addition, Algorithms, Arithmetic, Diagnostic Tests

Wainer, Howard; Lewis, Charles – Journal of Educational Measurement, 1990
Three different applications of the testlet concept are presented, and the psychometric models most suitable for each application are described. Difficulties that testlets can help overcome include (1) context effects; (2) item ordering; and (3) content balancing. Implications for test construction are discussed. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Elementary Secondary Education, Item Response Theory