Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
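The IRT side of such a framework can be illustrated with a minimal two-parameter logistic (2PL) item response function — a common IRT form, used here purely as an illustration; the abstract does not specify which IRT model the paper adopts for open-ended items:

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability of success on an
    item given ability theta, discrimination a, and difficulty b.
    Illustrative sketch only; the paper's exact model may differ."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A more able examinee has a higher success probability on the same item.
p_low = irt_2pl(theta=-1.0, a=1.2, b=0.0)
p_high = irt_2pl(theta=1.0, a=1.2, b=0.0)
```

At theta equal to the item difficulty, the 2PL gives probability 0.5, which is the usual anchor for interpreting the difficulty parameter.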
von Davier, Matthias; Tyack, Lillian; Khorramdel, Lale – Educational and Psychological Measurement, 2023
Automated scoring of free drawings or images as responses has yet to be used in large-scale assessments of student achievement. In this study, we propose artificial neural networks to classify these types of graphical responses from a TIMSS 2019 item. We compare the classification accuracy of convolutional and feed-forward approaches. Our…
Descriptors: Scoring, Networks, Artificial Intelligence, Elementary Secondary Education
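The distinction between the two network types comes down to the convolution operation: a convolutional layer applies a small shared-weight kernel across the image, exploiting spatial structure that a feed-forward net discards by flattening. A plain-Python sketch of the "valid" 2D cross-correlation that CNN layers compute (illustrative only; it does not reproduce the study's networks):

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D cross-correlation (the operation CNN layers compute):
    slide the kernel over the image and sum elementwise products.
    image and kernel are lists of lists of numbers."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    return [[sum(image[i + m][j + n] * kernel[m][n]
                 for m in range(kh) for n in range(kw))
             for j in range(w)]
            for i in range(h)]
```

Because the same kernel weights are reused at every position, a convolutional layer needs far fewer parameters than a fully connected layer over the same image.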
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
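An IRTree represents a single Likert response as a sequence of binary decisions. One common three-node tree for a 5-point item (midpoint choice, direction, extremity) can be sketched as below; the mapping is illustrative, and the paper's mixture approach goes further by allowing respondents to follow different underlying response processes:

```python
def irtree_decompose(category):
    """Map a 5-point Likert response (1..5, midpoint = 3) to the binary
    pseudo-items of a common IRTree: midpoint choice, direction, and
    extremity. None marks a node that is not reached. Illustrative
    mapping; IRTree specifications vary across applications."""
    if category == 3:
        return {"mid": 1, "dir": None, "ext": None}
    return {
        "mid": 0,
        "dir": 1 if category > 3 else 0,       # agree vs. disagree side
        "ext": 1 if category in (1, 5) else 0, # extreme vs. moderate
    }
```

Each pseudo-item can then be modeled with its own IRT parameters, so extreme response style is captured by the "ext" node separately from the trait measured by the "dir" node.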
Pohl, Steffi; Gräfe, Linda; Rose, Norman – Educational and Psychological Measurement, 2014
Data from competence tests usually show a number of missing responses on test items due to both omitted and not-reached items. Different approaches for dealing with missing responses exist, and there are no clear guidelines on which of those to use. While classical approaches rely on an ignorable missing data mechanism, the most recently developed…
Descriptors: Test Items, Achievement Tests, Item Response Theory, Models
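The practical stakes of the choice are easy to see: scoring omitted items as incorrect versus ignoring them changes an examinee's proportion-correct score. A minimal sketch of the two classical treatments (hypothetical scoring function, not the model-based approaches the article examines):

```python
def proportion_correct(responses, omitted_incorrect=True):
    """Proportion-correct score where None marks a missing response.
    omitted_incorrect=True scores missing as 0 (classical treatment);
    False drops missing items from the denominator (ignorable
    missing-data treatment)."""
    if omitted_incorrect:
        return sum(r or 0 for r in responses) / len(responses)
    observed = [r for r in responses if r is not None]
    return sum(observed) / len(observed)
```

For a response vector with two omissions, the two conventions can yield noticeably different scores, which is why guidelines on handling missing responses matter.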
Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim – Educational and Psychological Measurement, 2014
Typically, longitudinal growth modeling based on item response theory (IRT) requires repeated measures data from a single group with the same test design. If operational or item exposure problems are present, the same test may not be employed to collect data for longitudinal analyses and tests at multiple time points are constructed with unique…
Descriptors: Item Response Theory, Comparative Analysis, Test Items, Equated Scores
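When different test forms share a set of common items, their parameter estimates can be placed on one scale before growth is modeled. A sketch of mean/mean linking, one standard IRT linking method (the abstract does not state which method the authors use):

```python
def mean_mean_linking(a_new, b_new, a_ref, b_ref):
    """Mean/mean IRT linking: constants A, B that rescale common-item
    parameters from a new form (a_new, b_new) onto the reference scale
    (a_ref, b_ref) via b* = A*b + B and a* = a/A. Illustrative only."""
    mean = lambda xs: sum(xs) / len(xs)
    A = mean(a_new) / mean(a_ref)
    B = mean(b_ref) - A * mean(b_new)
    return A, B
```

After applying A and B, the transformed difficulties of the common items have the same mean on both forms, which is exactly the condition the method imposes.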
Finch, W. Holmes; Hernández Finch, Maria E. – Educational and Psychological Measurement, 2013
The assessment of test data for the presence of differential item functioning (DIF) is a key component of instrument development and validation. Among the many methods that have been used successfully in such analyses is the mixture modeling approach. Using this approach to identify the presence of DIF has been touted as potentially superior for…
Descriptors: Learning Disabilities, Testing Accommodations, Test Bias, Item Response Theory
Wolkowitz, Amanda A.; Skorupski, William P. – Educational and Psychological Measurement, 2013
When missing values are present in item response data, there are a number of ways one might impute a correct or incorrect response to a multiple-choice item. There are significantly fewer methods for imputing the actual response option an examinee may have provided if he or she had not omitted the item either purposely or accidentally. This…
Descriptors: Multiple Choice Tests, Statistical Analysis, Models, Accuracy
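Imputing the actual option an examinee would have chosen (rather than merely a correct/incorrect score) can be baselined with per-item mode imputation; the hypothetical sketch below is far simpler than the methods such an article would evaluate:

```python
from collections import Counter

def impute_mode(responses):
    """Fill omitted answers (None) on one item with the option most
    frequently chosen by the other examinees. A naive baseline for
    option-level imputation; illustrative only."""
    observed = [r for r in responses if r is not None]
    mode = Counter(observed).most_common(1)[0][0]
    return [mode if r is None else r for r in responses]
```

Model-based alternatives instead condition on the examinee's other responses, which is what makes option-level imputation harder than correct/incorrect imputation.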
Randall, Jennifer; Cheong, Yuk Fai; Engelhard, George, Jr. – Educational and Psychological Measurement, 2011
To address whether or not modifications in test administration influence item functioning for students with disabilities on a high-stakes statewide problem-solving assessment, a sample of 868 students (with and without disabilities) from 74 Georgia schools was randomly assigned to one of three testing conditions (resource guide, calculator, or…
Descriptors: Item Response Theory, Models, Context Effect, Test Bias
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2014
In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Computation, Test Reliability