Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 2 |
Since 2006 (last 20 years) | 7 |
Source
Educational and Psychological Measurement | 13 |
Author
Andrich, David | 1 |
Arce-Ferrer, Alvaro J. | 1 |
Bodenhorn, Nancy | 1 |
Brown, Anna | 1 |
Dirkzwager, A. | 1 |
Egberink, Iris J. L. | 1 |
Eggen, T. J. H. M. | 1 |
Frey, Andreas | 1 |
Goldhammer, Frank | 1 |
Guzman, Elvira Martinez | 1 |
Kim, Do-Hong | 1 |
Publication Type
Journal Articles | 13 |
Reports - Research | 11 |
Reports - Evaluative | 2 |
Speeches/Meeting Papers | 1 |
Location
Germany | 3 |
Netherlands | 2 |
Africa | 1 |
Asia | 1 |
Australia | 1 |
Canada | 1 |
Japan | 1 |
Mexico | 1 |
Spain | 1 |
United Kingdom | 1 |
United States | 1 |
Assessments and Surveys
Program for International Student Assessment | 2 |
Minnesota Multiphasic Personality Inventory | 1 |
Raven Progressive Matrices | 1 |
Lin, Yin; Brown, Anna – Educational and Psychological Measurement, 2017
A fundamental assumption in computerized adaptive testing is that item parameters are invariant with respect to context--items surrounding the administered item. This assumption, however, may not hold in forced-choice (FC) assessments, where explicit comparisons are made between items included in the same block. We empirically examined the…
Descriptors: Personality Measures, Measurement Techniques, Context Effect, Test Items

Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N. – Educational and Psychological Measurement, 2015
A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often only reported in terms of statistical significance, and researchers proposed different methods to empirically select anchor items. It is unclear, however, how many…
Descriptors: Personality Measures, Computer Assisted Testing, Measurement, Test Items
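
Anchor-based invariance checks of this kind can be illustrated outside any particular IRT package. The sketch below is only a simplified stand-in for the procedures discussed in the article: it fits a logistic model for one studied item with and without a group term, using the rest-score on the remaining (anchor) items as a proxy for the trait, and compares the two fits with a likelihood ratio test. All data and variable names are simulated for illustration.

```python
# Simplified likelihood-ratio check of measurement invariance for one item.
# The rest-score on the remaining (anchor) items stands in for the latent trait.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, k = 500, 10                       # persons, items (simulated)
theta = rng.normal(size=n)           # latent trait
group = rng.integers(0, 2, size=n)   # two groups to be compared
b = rng.normal(size=k)               # item difficulties
p = 1 / (1 + np.exp(-(theta[:, None] - b)))
resp = (rng.uniform(size=(n, k)) < p).astype(int)

y = resp[:, 0]                       # the studied item
anchor = resp[:, 1:].sum(axis=1)     # rest-score on the anchor items

X0 = sm.add_constant(np.column_stack([anchor]))          # invariance model
X1 = sm.add_constant(np.column_stack([anchor, group]))   # adds a group effect
ll0 = sm.Logit(y, X0).fit(disp=0).llf
ll1 = sm.Logit(y, X1).fit(disp=0).llf

lr = 2 * (ll1 - ll0)                 # likelihood-ratio statistic, df = 1
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df=1):.3f}")
```
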
Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank – Educational and Psychological Measurement, 2016
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
Descriptors: Educational Assessment, Coding, Automation, Responses
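
The general recipe described here, baseline natural language processing features feeding a statistical classifier built from openly licensed components, can be sketched briefly. The snippet below is a generic illustration with made-up responses and codes, not the software components used in the study.

```python
# Baseline automatic coding of short text responses:
# TF-IDF features feeding a simple statistical classifier (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-coded training responses (text, code).
train_texts = [
    "the price went up because demand increased",
    "more people wanted it so it cost more",
    "the shop was closed on sunday",
    "it was shut that day",
]
train_codes = [1, 1, 0, 0]

coder = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
coder.fit(train_texts, train_codes)

# Predicted code for a new, unseen response.
print(coder.predict(["demand rose, so the price rose too"]))
```
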
Frey, Andreas; Seitz, Nicki-Nils – Educational and Psychological Measurement, 2011
The usefulness of multidimensional adaptive testing (MAT) for the assessment of student literacy in the Programme for International Student Assessment (PISA) was examined within a real data simulation study. The responses of N = 14,624 students who participated in the PISA assessments of the years 2000, 2003, and 2006 in Germany were used to…
Descriptors: Adaptive Testing, Literacy, Academic Achievement, Achievement Tests
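
The core mechanism of adaptive testing, administering whichever remaining item is most informative at the current ability estimate, can be shown compactly. The sketch below is a unidimensional Rasch simplification with simulated items, not the multidimensional procedure evaluated in the article.

```python
# Minimal adaptive-testing loop under a Rasch model: select the most
# informative remaining item, then update the ability estimate (EAP on a grid).
import numpy as np

rng = np.random.default_rng(7)
b = rng.normal(size=30)             # difficulties of a small simulated item bank
theta_true = 0.8                    # simulated examinee
grid = np.linspace(-4, 4, 161)
posterior = np.exp(-0.5 * grid**2)  # standard-normal prior (unnormalised)

administered = []
theta_hat = 0.0

for _ in range(15):                 # fixed test length for illustration
    info = np.full(len(b), -np.inf)
    for i in (j for j in range(len(b)) if j not in administered):
        p = 1 / (1 + np.exp(-(theta_hat - b[i])))
        info[i] = p * (1 - p)       # Rasch item information at theta_hat
    item = int(np.argmax(info))

    p_true = 1 / (1 + np.exp(-(theta_true - b[item])))
    u = int(rng.uniform() < p_true)            # simulated response

    p_grid = 1 / (1 + np.exp(-(grid - b[item])))
    posterior *= p_grid if u else (1 - p_grid)
    theta_hat = float(np.sum(grid * posterior) / np.sum(posterior))

    administered.append(item)

print(f"estimated theta = {theta_hat:.2f} (true value {theta_true})")
```
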
Schroeders, Ulrich; Wilhelm, Oliver – Educational and Psychological Measurement, 2011
Whether an ability test delivered on either paper or computer provides the same information is an important question in applied psychometrics. Besides the validity, it is also the fairness of a measure that is at stake if the test medium affects performance. This study provides a comprehensive review of existing equivalence research in the field…
Descriptors: Reading Comprehension, Listening Comprehension, English (Second Language), Language Tests

Ng, Kok-Mun; Wang, Chuang; Kim, Do-Hong; Bodenhorn, Nancy – Educational and Psychological Measurement, 2010
The authors investigated the factor structure of the Schutte Self-Report Emotional Intelligence (SSREI) scale on international students. Via confirmatory factor analysis, the authors tested the fit of the models reported by Schutte et al. and five other studies to data from 640 international students in the United States. Results show that…
Descriptors: Emotional Intelligence, Factor Structure, Measures (Individuals), Factor Analysis

Arce-Ferrer, Alvaro J.; Guzman, Elvira Martinez – Educational and Psychological Measurement, 2009
This study investigates the effect of mode of administration of the Raven Standard Progressive Matrices test on the distribution, accuracy, and meaning of raw scores. A random sample of high school students takes counterbalanced paper-and-pencil and computer-based administrations of the test and answers a questionnaire surveying preferences for…
Descriptors: Factor Analysis, Raw Scores, Statistical Analysis, Computer Assisted Testing
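
A counterbalanced mode study of this kind usually reduces to a within-person contrast of the two score distributions. The sketch below uses invented scores rather than the study's data and shows one common way to test for a mode effect.

```python
# Within-person comparison of paper-and-pencil vs computer-based raw scores
# from a counterbalanced administration (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 120
paper = rng.normal(40, 8, size=n).round()
computer = (paper + rng.normal(0.5, 3, size=n)).round()   # assumed small mode effect

res = stats.ttest_rel(computer, paper)       # paired t-test on the same examinees
diff = computer - paper
d = diff.mean() / diff.std(ddof=1)           # standardised mean difference

print(f"mean difference = {diff.mean():.2f}, "
      f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}, d = {d:.2f}")
```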

Eggen, T. J. H. M.; Straetmans, G. J. J. M. – Educational and Psychological Measurement, 2000
Studied the use of adaptive testing when examinees are classified into three categories. Established testing algorithms with two different statistical computation procedures and evaluated them through simulation using an operative item bank from Dutch basic adult education. Results suggest a reduction of at least 22% in the mean number of items…
Descriptors: Adaptive Testing, Adult Education, Algorithms, Classification
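
Classification-oriented adaptive testing is often driven by a sequential likelihood-based stopping rule. The sketch below shows one generic example, a Wald sequential probability ratio test for a single ability cutoff under a Rasch model; it illustrates the idea rather than reconstructing the two computation procedures compared in the article.

```python
# Sequential probability ratio test (SPRT) for deciding whether an examinee
# is below or above one ability cutoff under a Rasch model (generic sketch).
import numpy as np

def sprt_classify(responses, difficulties, theta_low, theta_high,
                  alpha=0.05, beta=0.05):
    """Return 'low', 'high', or 'undecided' for the given response sequence."""
    upper = np.log((1 - beta) / alpha)       # Wald decision bounds
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for u, b in zip(responses, difficulties):
        p_hi = 1 / (1 + np.exp(-(theta_high - b)))
        p_lo = 1 / (1 + np.exp(-(theta_low - b)))
        llr += u * np.log(p_hi / p_lo) + (1 - u) * np.log((1 - p_hi) / (1 - p_lo))
        if llr >= upper:
            return "high"
        if llr <= lower:
            return "low"
    return "undecided"

# Hypothetical run: 12 responses from an above-cutoff examinee.
rng = np.random.default_rng(3)
b = rng.normal(size=12)
u = (rng.uniform(size=12) < 1 / (1 + np.exp(-(1.0 - b)))).astype(int)
print(sprt_classify(u, b, theta_low=-0.5, theta_high=0.5))
```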

Sukigara, Masune – Educational and Psychological Measurement, 1996
The New Japanese version of the Minnesota Multiphasic Personality Inventory (MMPI) was administered twice to 200 Japanese female college students to verify the equivalence of the computer- and booklet-administered formats. For four scales, scores from the computer version were statistically significantly higher than those from the booklet…
Descriptors: College Students, Computer Assisted Testing, Females, Foreign Countries

Dirkzwager, A. – Educational and Psychological Measurement, 1996
Testing with personal probabilities eliminates guessing, provided that the subjects are well calibrated. A probability testing study with 47 Dutch elementary school children who used an interactive computer program shows that even 11-year-olds can estimate their personal probabilities correctly. (SLD)
Descriptors: Computer Assisted Testing, Elementary Education, Elementary School Students, Estimation (Mathematics)
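
Whether respondents are well calibrated can be checked by comparing the probabilities they state with the proportion of answers that actually turn out to be correct. The sketch below does this on fabricated responses, reporting a Brier score and a simple binned calibration table; it illustrates the idea of calibration only, not the scoring used in the study.

```python
# Calibration check for personal-probability responses (hypothetical data).
import numpy as np

rng = np.random.default_rng(11)
stated = rng.uniform(0.3, 1.0, size=300)                 # stated probability correct
correct = (rng.uniform(size=300) < stated).astype(int)   # simulated as well calibrated

print(f"Brier score: {np.mean((stated - correct) ** 2):.3f}")   # lower is better

bins = np.linspace(0.3, 1.0, 8)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (stated >= lo) & (stated < hi)
    if mask.any():
        print(f"stated {lo:.2f}-{hi:.2f}: "
              f"mean stated {stated[mask].mean():.2f}, "
              f"observed accuracy {correct[mask].mean():.2f}")
```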

Ponsoda, Vincente; And Others – Educational and Psychological Measurement, 1997
A study involving 209 Spanish high school students compared computer-based English vocabulary tests: (1) a self-adapted test (SAT); (2) a computerized adaptive test (CAT); (3) a conventional test; and (4) a test combining SAT and CAT. No statistically significant differences were found among test types for estimated ability or posttest anxiety.…
Descriptors: Ability, Adaptive Testing, Anxiety, Comparative Analysis
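
Comparing several delivery conditions on an outcome such as estimated ability or posttest anxiety is a between-groups analysis. The sketch below runs a one-way ANOVA on fabricated anxiety scores for four hypothetical conditions; none of the numbers come from the study.

```python
# One-way ANOVA comparing an outcome (e.g., posttest anxiety) across four
# test-delivery conditions (hypothetical scores).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sat = rng.normal(30, 6, size=50)            # self-adapted test
cat = rng.normal(31, 6, size=50)            # computerized adaptive test
conventional = rng.normal(32, 6, size=50)
combined = rng.normal(30, 6, size=50)       # SAT + CAT

res = stats.f_oneway(sat, cat, conventional, combined)
print(f"F = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```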

Styles, Irene; Andrich, David – Educational and Psychological Measurement, 1993
This paper describes the use of the Rasch model to help implement computerized administration of the standard and advanced forms of Raven's Progressive Matrices (RPM), to compare relative item difficulties, and to convert scores between the standard and advanced forms. The sample consisted of 95 girls and 95 boys in Australia. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education
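
Once the two forms are calibrated on a common Rasch scale, a raw score on one form can be mapped to an ability value through that form's test characteristic curve and then to an expected score on the other form. The sketch below shows that conversion step with invented item difficulties, not the actual RPM calibrations.

```python
# Rasch-based score conversion between two calibrated forms
# (illustrative item difficulties only).
import numpy as np
from scipy.optimize import brentq

b_standard = np.linspace(-2.0, 1.0, 30)   # hypothetical standard-form difficulties
b_advanced = np.linspace(0.0, 3.0, 24)    # hypothetical advanced-form difficulties

def expected_score(theta, b):
    """Test characteristic curve: expected raw score at ability theta."""
    return np.sum(1 / (1 + np.exp(-(theta - b))))

def raw_to_theta(raw, b):
    """Invert the TCC to find the ability matching a raw score."""
    return brentq(lambda t: expected_score(t, b) - raw, -8, 8)

raw_standard = 22                          # observed score on the standard form
theta = raw_to_theta(raw_standard, b_standard)
print(f"theta = {theta:.2f}, "
      f"expected advanced-form score = {expected_score(theta, b_advanced):.1f}")
```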

Skakun, Ernest N.; And Others – Educational and Psychological Measurement, 1979
Factor analysis was used to determine whether computerized patient management problems had the same factor structure as multiple choice examinations and rating scales. The factor structure was found to be similar to that of the multiple choice examinations but not to that of the rating scales. (JKS)
Descriptors: Comparative Testing, Computer Assisted Testing, Computer Programs, Factor Structure
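
One simple way to quantify how similar two factor structures are is Tucker's congruence coefficient between loading vectors. The snippet below computes it for made-up loadings and merely illustrates the comparison idea, not the analysis reported in the article.

```python
# Tucker's congruence coefficient between factor-loading vectors
# (hypothetical loadings for illustration).
import numpy as np

def congruence(x, y):
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))

pmp = np.array([0.62, 0.55, 0.48, 0.70, 0.51])      # patient management problems
mcq = np.array([0.60, 0.58, 0.52, 0.66, 0.47])      # multiple choice examination
rating = np.array([0.10, 0.22, 0.75, 0.05, 0.68])   # rating scale

print(f"PMP vs MCQ:    {congruence(pmp, mcq):.2f}")      # close to 1: similar structure
print(f"PMP vs rating: {congruence(pmp, rating):.2f}")   # noticeably lower
```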