Greiff, Samuel; Wustenberg, Sascha; Funke, Joachim – Applied Psychological Measurement, 2012
This article addresses two unsolved measurement issues in dynamic problem solving (DPS) research: (a) unsystematic construction of DPS tests making a comparison of results obtained in different studies difficult and (b) use of time-intensive single tasks leading to severe reliability problems. To solve these issues, the MicroDYN approach is…
Descriptors: Problem Solving, Tests, Measurement, Structural Equation Models
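As background for the MicroDYN approach named in the entry above, which builds small dynamic tasks on linear systems of input and output variables, the sketch below shows one such toy system. The matrices, variable counts, and exploration steps are invented assumptions for illustration, not material from the article.

```python
import numpy as np

# Illustrative sketch of a tiny MicroDYN-style dynamic task:
# output variables evolve as a linear function of the previous
# outputs and the inputs set at each round. All values are made up.
A = np.array([[1.0, 0.0],        # how outputs carry over to the next round
              [0.0, 1.0]])
B = np.array([[0.5, 0.0, 0.2],   # how the three inputs affect the two outputs
              [0.0, 0.3, 0.0]])

def step(outputs, inputs):
    """Advance the system by one round given the chosen input settings."""
    return A @ outputs + B @ inputs

outputs = np.zeros(2)                  # start state
for inputs in ([1.0, 0.0, 0.0],        # vary one input at a time to explore
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]):
    outputs = step(outputs, np.array(inputs))
    print(outputs)
```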
Wise, Steven L.; DeMars, Christine E. – Applied Psychological Measurement, 2009
Attali (2005) recently demonstrated that Cronbach's coefficient alpha estimate of reliability for number-right multiple-choice tests will tend to be deflated by speededness, rather than inflated as is commonly believed and taught. Although the methods, findings, and conclusions of Attali (2005) are correct, his article may inadvertently invite a…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Reliability, Computation
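Since the entry above turns on how coefficient alpha is computed for number-right scores, a minimal sketch of the standard alpha formula follows. It shows only the textbook statistic, not Attali's or the authors' speededness analysis, and the score matrix is invented.

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for an examinees-by-items matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 0/1 (number-right) data for five examinees and four items.
data = [[1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 1],
        [1, 1, 1, 0],
        [0, 0, 1, 1]]
print(cronbach_alpha(data))
```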
Lee, Won-Chan – Applied Psychological Measurement, 2007
This article introduces a multinomial error model, which models an examinee's test scores obtained over repeated measurements of an assessment that consists of polytomously scored items. A compound multinomial error model is also introduced for situations in which items are stratified according to content categories and/or prespecified numbers of…
Descriptors: Simulation, Error of Measurement, Scoring, Test Items
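As a rough illustration of the kind of model named in the entry above, the sketch below simulates repeated polytomous test scores by drawing per-category item counts from a multinomial distribution. The category probabilities, score points, and test length are assumptions made for the example; this is not Lee's (2007) estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: a 20-item test where each item is scored 0, 1, or 2,
# and an examinee's true probabilities of earning each score are fixed.
n_items = 20
score_points = np.array([0, 1, 2])
true_probs = np.array([0.2, 0.5, 0.3])   # assumed, per examinee

# Simulate repeated administrations: each replication draws how many
# items land in each score category, then converts counts to a total score.
counts = rng.multinomial(n_items, true_probs, size=1000)
total_scores = counts @ score_points
print(total_scores.mean(), total_scores.var(ddof=1))
```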
Gorin, Joanna S.; Embretson, Susan E. – Applied Psychological Measurement, 2006
Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more…
Descriptors: Difficulty Level, Test Items, Modeling (Psychology), Paragraph Composition
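Because the entry above describes algorithmic item generation as systematically combining item features, a hedged sketch of that general idea is given below. The feature names, template, and answer key are invented for illustration and are not the authors' generative model.

```python
from itertools import product

# Invented features for a simple arithmetic word-problem template; a real
# generator would tie each feature to a cognitive model of item difficulty.
operations = {"added to": lambda a, b: a + b, "removed from": lambda a, b: b - a}
magnitudes = [(3, 8), (14, 27)]
contexts = ["apples", "pages"]

items = []
for (op_text, op), (a, b), noun in product(operations.items(), magnitudes, contexts):
    stem = f"If {a} {noun} are {op_text} a pile of {b} {noun}, how many {noun} are there?"
    items.append((stem, op(a, b)))   # item stem plus its scoring key

for stem, key in items:
    print(stem, "->", key)
```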