Feuerstahler, Leah M.; Waller, Niels; MacDonald, Angus, III – Educational and Psychological Measurement, 2020
Although item response models have grown in popularity in many areas of educational and psychological assessment, there are relatively few applications of these models in experimental psychopathology. In this article, we explore the use of item response models in the context of a computerized cognitive task designed to assess visual working memory…
Descriptors: Item Response Theory, Psychopathology, Intelligence Tests, Psychological Evaluation
Lin, Chuan-Ju – Educational and Psychological Measurement, 2011
This study compares four item selection criteria for two-category computerized classification testing: (1) Fisher information (FI), (2) Kullback-Leibler information (KLI), (3) weighted log-odds ratio (WLOR), and (4) mutual information (MI), with respect to the efficiency and accuracy of classification decision using the sequential probability…
Descriptors: Computer Assisted Testing, Adaptive Testing, Selection, Test Items
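As a rough illustration of the first criterion above, here is a minimal sketch of Fisher-information item selection under a two-parameter logistic (2PL) model. The item bank and parameter values are hypothetical, and this is not the study's implementation:

```python
import math

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P(theta) * (1 - P(theta))."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Hypothetical item bank: (discrimination a, difficulty b)
bank = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.4), (1.0, 1.0)]

def select_item(theta, bank):
    """Pick the index of the item with maximum Fisher information
    at the current ability estimate theta."""
    return max(range(len(bank)),
               key=lambda i: fisher_info_2pl(theta, *bank[i]))

print(select_item(0.5, bank))  # highly discriminating item near theta wins
```

The other criteria (KLI, WLOR, MI) replace the scoring function with information measures computed at or between the classification cut scores rather than at the ability estimate.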
Jiao, Hong; Liu, Junhui; Haynie, Kathleen; Woo, Ada; Gorham, Jerry – Educational and Psychological Measurement, 2012
This study explored the impact of partial credit scoring of one type of innovative items (multiple-response items) in a computerized adaptive version of a large-scale licensure pretest and operational test settings. The impacts of partial credit scoring on the estimation of the ability parameters and classification decisions in operational test…
Descriptors: Test Items, Computer Assisted Testing, Measures (Individuals), Scoring
Choi, Seung W.; Grady, Matthew W.; Dodd, Barbara G. – Educational and Psychological Measurement, 2011
The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
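For context, the PSER rule is typically compared against a plain standard-error stopping rule. The sketch below shows that simpler baseline (stop once the standard error of the ability estimate falls below a target, or the item budget runs out); the thresholds are illustrative assumptions, not values from the study:

```python
import math

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def should_stop(theta, administered, se_target=0.3, max_items=30):
    """Baseline stopping rule: stop when SE(theta) <= se_target
    or when the maximum test length is reached.
    `administered` is a list of (a, b) tuples for items given so far."""
    if len(administered) >= max_items:
        return True
    total_info = sum(fisher_info_2pl(theta, a, b) for a, b in administered)
    if total_info <= 0.0:
        return False  # no information yet; keep testing
    se = 1.0 / math.sqrt(total_info)
    return se <= se_target
```

The PSER rule described in the abstract goes further by predicting how much the next item would reduce the standard error, stopping when that expected reduction is negligible.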
Frey, Andreas; Seitz, Nicki-Nils – Educational and Psychological Measurement, 2011
The usefulness of multidimensional adaptive testing (MAT) for the assessment of student literacy in the Programme for International Student Assessment (PISA) was examined within a real data simulation study. The responses of N = 14,624 students who participated in the PISA assessments of the years 2000, 2003, and 2006 in Germany were used to…
Descriptors: Adaptive Testing, Literacy, Academic Achievement, Achievement Tests
Cheng, Ying – Educational and Psychological Measurement, 2010
This article proposes a new item selection method, namely, the modified maximum global discrimination index (MMGDI) method, for cognitive diagnostic computerized adaptive testing (CD-CAT). The new method captures two aspects of the appeal of an item: (a) the amount of contribution it can make toward adequate coverage of every attribute and (b) the…
Descriptors: Cognitive Tests, Diagnostic Tests, Computer Assisted Testing, Adaptive Testing
Wang, Wen-Chung; Liu, Chen-Wei – Educational and Psychological Measurement, 2011
The generalized graded unfolding model (GGUM) has been recently developed to describe item responses to Likert items (agree-disagree) in attitude measurement. In this study, the authors (a) developed two item selection methods in computerized classification testing under the GGUM, the current estimate/ability confidence interval method and the cut…
Descriptors: Computer Assisted Testing, Adaptive Testing, Classification, Item Response Theory
Chang, Shu-Ren; Plake, Barbara S.; Kramer, Gene A.; Lien, Shu-Mei – Educational and Psychological Measurement, 2011
This study examined the amount of time that different ability-level examinees spend on questions they answer correctly or incorrectly across different pretest item blocks presented on a fixed-length, time-restricted computerized adaptive test (CAT). Results indicate that different ability-level examinees require different amounts of time to…
This study examined the amount of time that different ability-level examinees spend on questions they answer correctly or incorrectly across different pretest item blocks presented on a fixed-length, time-restricted computerized adaptive test (CAT). Results indicate that different ability-level examinees require different amounts of time to…
Descriptors: Evidence, Test Items, Reaction Time, Adaptive Testing
Thompson, Nathan A. – Educational and Psychological Measurement, 2009
Several alternatives for item selection algorithms based on item response theory in computerized classification testing (CCT) have been suggested, with no conclusive evidence on the substantial superiority of a single method. It is argued that the lack of sizable effect is because some of the methods actually assess items very similarly through…
Descriptors: Item Response Theory, Psychoeducational Methods, Cutting Scores, Simulation
Wang, Shudong; Jiao, Hong; Young, Michael J.; Brooks, Thomas; Olson, John – Educational and Psychological Measurement, 2007
This study conducted a meta-analysis of computer-based and paper-and-pencil administration mode effects on K-12 student mathematics tests. Both initial and final results based on fixed- and random-effects models are presented. The results based on the final selected studies with homogeneous effect sizes show that the administration mode had no…
Descriptors: Meta Analysis, Mathematics Tests, Elementary Secondary Education, Effect Size
Breithaupt, Krista; Hare, Donovan R. – Educational and Psychological Measurement, 2007
Many challenges exist for high-stakes testing programs offering continuous computerized administration. The automated assembly of test questions to exactly meet content and other requirements, provide uniformity, and control item exposure can be modeled and solved by mixed-integer programming (MIP) methods. A case study of the computerized…
Descriptors: Testing Programs, Psychometrics, Certification, Accounting
Kim, Do-Hong; Huynh, Huynh – Educational and Psychological Measurement, 2008
The current study compared student performance between paper-and-pencil testing (PPT) and computer-based testing (CBT) on a large-scale statewide end-of-course English examination. Analyses were conducted at both the item and test levels. The overall results suggest that scores obtained from PPT and CBT were comparable. However, at the content…
Descriptors: Reading Comprehension, Computer Assisted Testing, Factor Analysis, Comparative Testing
Penfield, Randall D. – Educational and Psychological Measurement, 2007
The standard error of the maximum likelihood ability estimator is commonly estimated by evaluating the test information function at an examinee's current maximum likelihood estimate (a point estimate) of ability. Because the test information function evaluated at the point estimate may differ from the test information function evaluated at an…
Descriptors: Simulation, Adaptive Testing, Computation, Maximum Likelihood Statistics
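The standard practice the abstract refers to can be sketched in a few lines: the standard error of the maximum likelihood ability estimate is the reciprocal square root of the test information function, evaluated at the point estimate. The 2PL parameterization and item values here are illustrative assumptions:

```python
import math

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def standard_error(theta_hat, items):
    """SE of the ML ability estimate: 1 / sqrt(test information),
    with the test information evaluated at the point estimate theta_hat.
    `items` is a list of (a, b) tuples for the administered items."""
    test_info = sum(fisher_info_2pl(theta_hat, a, b) for a, b in items)
    return 1.0 / math.sqrt(test_info)

# Four identical items centered at theta_hat = 0 each contribute 0.25,
# so the test information is 1.0 and the SE is 1.0.
print(standard_error(0.0, [(1.0, 0.0)] * 4))  # -> 1.0
```

Penfield's point is that because the point estimate carries its own uncertainty, this SE can differ from the SE one would obtain by accounting for the full distribution of plausible ability values.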
Wang, Shudong; Jiao, Hong; Young, Michael J.; Brooks, Thomas; Olson, John – Educational and Psychological Measurement, 2008
In recent years, computer-based testing (CBT) has grown in popularity, is increasingly being implemented across the United States, and will likely become the primary mode for delivering tests in the future. Although CBT offers many advantages over traditional paper-and-pencil testing, assessment experts, researchers, practitioners, and users have…
Descriptors: Elementary Secondary Education, Reading Achievement, Computer Assisted Testing, Comparative Analysis

Dwight, Stephen A.; Feigelson, Melissa E. – Educational and Psychological Measurement, 2000
Conducted a meta-analysis to determine the extent to which the computer administration of a measure influences socially desirable responding. Discusses implications of the findings about impression management in terms of how they contribute to the explication of the construct of social desirability and cross-mode equivalence. (Author/SLD)
Descriptors: Computer Assisted Testing, Meta Analysis, Social Desirability