Showing all 8 results
Peer reviewed
Raykov, Tenko; Pusic, Martin – Educational and Psychological Measurement, 2023
This note is concerned with the evaluation of location parameters for polytomous items in multiple-component measuring instruments. A point and interval estimation procedure for these parameters, developed within the framework of latent variable modeling, is outlined. The method permits educational, behavioral, biomedical, and marketing…
Descriptors: Item Analysis, Measurement Techniques, Computer Software, Intervals
Peer reviewed
Nagy, Gabriel; Ulitzsch, Esther – Educational and Psychological Measurement, 2022
Disengaged item responses pose a threat to the validity of the results provided by large-scale assessments. Several procedures for identifying disengaged responses on the basis of observed response times have been suggested, and item response theory (IRT) models for response engagement have been proposed. We outline that response time-based…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Predictor Variables, Classification
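The response-time-based identification the abstract mentions can be illustrated with a simple rapid-guessing filter (a minimal sketch of the general technique, not the authors' models; the function name and the 10%-of-median threshold are illustrative assumptions):

```python
import numpy as np

def flag_disengaged(response_times, frac=0.10):
    """Flag likely rapid-guessing responses on one item: any response
    faster than `frac` of the item's median response time.
    The threshold choice is a common heuristic, not a fixed rule."""
    rt = np.asarray(response_times, dtype=float)
    threshold = frac * np.median(rt)
    return rt < threshold

flags = flag_disengaged([20.0, 25.0, 1.0, 30.0, 22.0])  # third response is suspiciously fast
```

Model-based approaches (e.g., IRT mixture models for engagement) refine this by estimating, rather than fixing, the boundary between engaged and disengaged responding.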
Peer reviewed
von Davier, Matthias; Tyack, Lillian; Khorramdel, Lale – Educational and Psychological Measurement, 2023
Automated scoring of free drawings or images as responses has yet to be used in large-scale assessments of student achievement. In this study, we propose artificial neural networks to classify these types of graphical responses from a TIMSS 2019 item, and we compare the classification accuracy of convolutional and feed-forward approaches. Our…
Descriptors: Scoring, Networks, Artificial Intelligence, Elementary Secondary Education
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2018
This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…
Descriptors: Measurement Techniques, Factor Analysis, Item Response Theory, Likert Scales
Peer reviewed
Downey, Ronald G.; Stockdale, Margaret S. – Educational and Psychological Measurement, 1987
Lord's method for detecting subgroup bias at the item level uses a three-parameter item characteristic curve model. A chi square statistic is computed on the multivariate differences between the parameter estimates of item discrimination and difficulty. The LOGIST program and additional programs written in BASIC are used. (Author/GDC)
Descriptors: Computer Software, Item Analysis, Latent Trait Theory, Statistical Bias
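Lord's chi-square on the multivariate difference between subgroup parameter estimates can be sketched as follows (a minimal sketch, not the article's LOGIST/BASIC implementation; the two-parameter vector of discrimination and difficulty, and all numbers, are illustrative assumptions):

```python
import numpy as np
from scipy.stats import chi2

def lord_chi_square(params_ref, params_focal, cov_ref, cov_focal):
    """Lord's chi-square for item bias: d' (Sigma_ref + Sigma_focal)^-1 d,
    where d is the difference in item parameter estimates (e.g.,
    discrimination a and difficulty b) between reference and focal groups."""
    d = np.asarray(params_ref, dtype=float) - np.asarray(params_focal, dtype=float)
    pooled = np.asarray(cov_ref, dtype=float) + np.asarray(cov_focal, dtype=float)
    stat = float(d @ np.linalg.inv(pooled) @ d)
    p = float(chi2.sf(stat, df=len(d)))  # df = number of parameters compared
    return stat, p

# Illustrative (a, b) estimates and diagonal covariance matrices:
stat, p = lord_chi_square([1.2, 0.3], [1.0, 0.6],
                          [[0.04, 0.0], [0.0, 0.09]],
                          [[0.05, 0.0], [0.0, 0.10]])
```

A large statistic (small p) suggests the item's parameters differ between subgroups, i.e., potential item bias.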
Peer reviewed
Luecht, Richard M. – Educational and Psychological Measurement, 1987
Test Pac, a test scoring and analysis computer program for moderate-sized sample designs using dichotomous response items, performs comprehensive item analyses and multiple reliability estimates. It also performs single-facet generalizability analysis of variance, single-parameter item response theory analyses, test score reporting, and computer…
Descriptors: Computer Assisted Testing, Computer Software, Computer Software Reviews, Item Analysis
Peer reviewed
Aiken, Lewis R. – Educational and Psychological Measurement, 1985
Three numerical coefficients for analyzing the validity and reliability of ratings are described. Each coefficient is computed as the ratio of an obtained to a maximum sum of differences in ratings. The coefficients are also applicable to the item analysis, agreement analysis, and cluster or factor analysis of rating-scale data. (Author/BW)
Descriptors: Computer Software, Data Analysis, Factor Analysis, Item Analysis
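The "ratio of an obtained to a maximum sum of differences in ratings" can be illustrated for one item rated by several judges (a minimal sketch of one such coefficient, often called Aiken's V; the function name and example ratings are ours, and the article's other two coefficients are not reproduced here):

```python
def aiken_v(ratings, lo, hi):
    """Aiken's V for one item: the sum of each rating's distance above
    the scale minimum, divided by the maximum possible sum n*(c-1),
    where c is the number of scale categories. Ranges from 0 to 1."""
    n = len(ratings)
    c = hi - lo + 1
    obtained = sum(r - lo for r in ratings)
    return obtained / (n * (c - 1))

v = aiken_v([4, 5, 4, 3], lo=1, hi=5)  # → 0.75
```

Values near 1 indicate raters agree the item is high on the judged property (e.g., content validity); values near 0 indicate agreement that it is low.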
Peer reviewed
Aiken, Lewis R. – Educational and Psychological Measurement, 1989
Two alternatives to traditional item analysis and reliability estimation procedures are considered for determining the difficulty, discrimination, and reliability of optional items on essay and other tests. A computer program to compute these measures is described, and illustrations are given. (SLD)
Descriptors: College Entrance Examinations, Computer Software, Difficulty Level, Essay Tests
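Difficulty and discrimination for an optional item can be illustrated with classical indices computed over only those examinees who attempted it (a sketch using proportion-correct and the point-biserial correlation; the NaN convention for skipped items is our assumption, not the article's program):

```python
import numpy as np

def optional_item_stats(responses, total_scores):
    """Classical item statistics for an optional item.

    responses: 1 = correct, 0 = incorrect, NaN = item not attempted.
    total_scores: each examinee's total test score.
    Returns (difficulty, discrimination): proportion correct among
    attempters, and the correlation of item score with total score.
    """
    r = np.asarray(responses, dtype=float)
    t = np.asarray(total_scores, dtype=float)
    attempted = ~np.isnan(r)          # restrict to examinees who chose this item
    r, t = r[attempted], t[attempted]
    difficulty = float(r.mean())
    discrimination = float(np.corrcoef(r, t)[0, 1])
    return difficulty, discrimination

diff, disc = optional_item_stats([1, 0, 1, 1, np.nan, 0],
                                 [8, 3, 9, 7, 5, 4])
```

Because only self-selected examinees attempt each optional item, such indices are computed on different subgroups per item, which is precisely the complication that motivates alternatives to the traditional procedures.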