Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
Gönülates, Emre – Educational and Psychological Measurement, 2019
This article introduces the Quality of Item Pool (QIP) Index, a novel approach to quantifying the adequacy of an item pool of a computerized adaptive test for a given set of test specifications and examinee population. This index ranges from 0 to 1, with values close to 1 indicating the item pool presents optimum items to examinees throughout the…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Error of Measurement
Jiao, Hong; Liu, Junhui; Haynie, Kathleen; Woo, Ada; Gorham, Jerry – Educational and Psychological Measurement, 2012
This study explored the impact of partial credit scoring of one type of innovative items (multiple-response items) in a computerized adaptive version of a large-scale licensure pretest and operational test settings. The impacts of partial credit scoring on the estimation of the ability parameters and classification decisions in operational test…
Descriptors: Test Items, Computer Assisted Testing, Measures (Individuals), Scoring
Choi, Seung W.; Grady, Matthew W.; Dodd, Barbara G. – Educational and Psychological Measurement, 2011
The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
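The abstract above describes the core idea of a variance-based stopping rule: estimate how much the standard error would shrink if one more item were administered, and stop when that predicted reduction is negligible. The toy sketch below illustrates that idea using a simple normal-approximation posterior update; it is an illustration of the general principle only, not Choi, Grady, and Dodd's actual PSER procedure, and the function names and the 0.01 threshold are invented for the example.

```python
import math

def predicted_se_reduction(posterior_var, item_info):
    """Predicted drop in the standard error of the ability estimate
    if one more item were administered.

    Toy normal approximation: an item with Fisher information I
    shrinks the posterior variance from v to 1 / (1/v + I).
    """
    se_now = math.sqrt(posterior_var)
    var_next = 1.0 / (1.0 / posterior_var + item_info)
    return se_now - math.sqrt(var_next)

def should_stop(posterior_var, best_item_info, min_reduction=0.01):
    """Stop the adaptive test when even the most informative remaining
    item would reduce the standard error by less than min_reduction."""
    return predicted_se_reduction(posterior_var, best_item_info) < min_reduction

# Early in a test the posterior is wide, so another item helps a lot;
# late in a test the predicted gain becomes too small to justify it.
print(should_stop(1.0, 0.5))    # wide posterior: keep testing
print(should_stop(0.04, 0.1))   # narrow posterior: stop
```

The practical appeal of a rule like this over a fixed standard-error cutoff is that it can also end the test when the item bank simply has nothing informative left to offer, regardless of the current precision.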
Gnambs, Timo; Batinic, Bernad – Educational and Psychological Measurement, 2011
Computer-adaptive classification tests focus on classifying respondents in different proficiency groups (e.g., for pass/fail decisions). To date, adaptive classification testing has been dominated by research on dichotomous response formats and classifications in two groups. This article extends this line of research to polytomous classification…
Descriptors: Test Length, Computer Assisted Testing, Classification, Test Items
Chang, Shu-Ren; Plake, Barbara S.; Kramer, Gene A.; Lien, Shu-Mei – Educational and Psychological Measurement, 2011
This study examined the amount of time that different ability-level examinees spend on questions they answer correctly or incorrectly across different pretest item blocks presented on a fixed-length, time-restricted computerized adaptive test (CAT). Results indicate that different ability-level examinees require different amounts of time to…
Descriptors: Evidence, Test Items, Reaction Time, Adaptive Testing
Ng, Kok-Mun; Wang, Chuang; Kim, Do-Hong; Bodenhorn, Nancy – Educational and Psychological Measurement, 2010
The authors investigated the factor structure of the Schutte Self-Report Emotional Intelligence (SSREI) scale on international students. Via confirmatory factor analysis, the authors tested the fit of the models reported by Schutte et al. and five other studies to data from 640 international students in the United States. Results show that…
Descriptors: Emotional Intelligence, Factor Structure, Measures (Individuals), Factor Analysis
Arce-Ferrer, Alvaro J.; Guzman, Elvira Martinez – Educational and Psychological Measurement, 2009
This study investigates the effect of mode of administration of the Raven Standard Progressive Matrices test on the distribution, accuracy, and meaning of raw scores. A random sample of high school students takes counterbalanced paper-and-pencil and computer-based administrations of the test and answers a questionnaire surveying preferences for…
Descriptors: Factor Analysis, Raw Scores, Statistical Analysis, Computer Assisted Testing
Smither, James W.; Walker, Alan G.; Yap, Michael K. T. – Educational and Psychological Measurement, 2004
In this study, 5,257 employees provided upward feedback ratings for 759 target managers who had the option of having their subordinates rate them using a traditional paper-and-pencil (opscan) response mode or using the company's intranet. Preliminary analyses showed mean online ratings were more favorable than were mean paper-and-pencil ratings (d…
Descriptors: Feedback, Evaluation Methods, Measures (Individuals), Responses
Breland, Hunter; Lee, Yong-Won; Muraki, Eiji – Educational and Psychological Measurement, 2005
Eighty-three Test of English as a Foreign Language (TOEFL) writing prompts administered via computer-based testing between July 1998 and August 2000 were examined for differences attributable to the response mode (handwriting or word processing) chosen by examinees. Differences were examined statistically using polytomous logistic regression. A…
Descriptors: Evaluation Methods, Word Processing, Handwriting, Effect Size