Showing 1 to 15 of 21 results
Peer reviewed
Hasibe Yahsi Sari; Hulya Kelecioglu – International Journal of Assessment Tools in Education, 2025
The aim of the study is to examine the effect of the polytomous item ratio on ability estimation under different conditions in multistage tests (MST) using mixed-format tests. The study is simulation-based research. In the PISA 2018 application, the ability parameters of the individuals and the item pool were created by using the item parameters estimated from…
Descriptors: Test Items, Test Format, Accuracy, Test Length
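The mechanics behind this kind of study can be made concrete. The sketch below is not the authors' design (they use mixed-format PISA 2018 items); it is a minimal two-stage MST with dichotomous Rasch items, hypothetical module difficulties, and a simple number-correct routing rule.

```python
# Minimal two-stage MST sketch under the Rasch model. Module difficulties,
# the routing cutoff, and the Newton settings are all illustrative; the
# study itself works with mixed dichotomous/polytomous PISA 2018 items.
import numpy as np

rng = np.random.default_rng(0)

def rasch_p(theta, b):
    """P(correct) under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def simulate_module(theta, difficulties):
    """Simulate 0/1 responses to one module."""
    return (rng.random(len(difficulties)) < rasch_p(theta, difficulties)).astype(int)

def mle_theta(responses, difficulties, iters=25):
    """Newton-Raphson ML ability estimate, clipped to avoid divergence."""
    theta = 0.0
    for _ in range(iters):
        p = rasch_p(theta, difficulties)
        grad = np.sum(responses - p)   # score function
        info = np.sum(p * (1 - p))     # test information
        theta = float(np.clip(theta + grad / info, -4, 4))
    return theta

# Hypothetical pool: a medium routing module, then an easy or a hard module.
routing = np.linspace(-1, 1, 10)
easy, hard = np.linspace(-2, 0, 10), np.linspace(0, 2, 10)

theta_true = 0.8
r1 = simulate_module(theta_true, routing)
stage2 = hard if r1.sum() >= 5 else easy   # number-correct routing rule
r2 = simulate_module(theta_true, stage2)

theta_hat = mle_theta(np.concatenate([r1, r2]), np.concatenate([routing, stage2]))
print(f"true theta = {theta_true:.2f}, estimate = {theta_hat:.2f}")
```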
Peer reviewed
Ebru Dogruöz; Hülya Kelecioglu – International Journal of Assessment Tools in Education, 2024
In this research, multistage adaptive tests (MST) were compared according to sample size, panel pattern, and module length for top-down and bottom-up test assembly methods. Within the scope of the research, data from PISA 2015 were used, and simulation studies were conducted with the parameters estimated from these data. Analysis results for…
Descriptors: Adaptive Testing, Test Construction, Foreign Countries, Achievement Tests
Peer reviewed
Saskia van Laar; Johan Braeken – International Journal of Testing, 2024
This study examined the impact of two questionnaire characteristics, scale position and questionnaire length, on the prevalence of random responders in the TIMSS 2015 eighth-grade student questionnaire. While there was no support for an absolute effect of questionnaire length, we did find a positive effect for scale position, with an increase of…
Descriptors: Middle School Students, Grade 8, Questionnaires, Test Length
Peer reviewed
Karadavut, Tugba; Cohen, Allan S.; Kim, Seock-Ho – Measurement: Interdisciplinary Research and Perspectives, 2020
Mixture Rasch (MixRasch) models conventionally assume normal distributions for latent ability. Previous research has shown that the assumption of normality is often unmet in educational and psychological measurement. When normality is assumed, asymmetry in the actual latent ability distribution has been shown to result in extraction of spurious…
Descriptors: Item Response Theory, Ability, Statistical Distributions, Sample Size
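For orientation, the mixture Rasch model the abstract refers to has the standard form below (notation assumed here, not taken from the article); the normality assumption at issue is the choice of a normal density for each class's ability distribution f_g.

```latex
% Standard mixture Rasch model: class g has mixing weight \pi_g and its own
% item difficulties b_{ig}; f_g(\theta) is the within-class ability density,
% conventionally taken to be normal -- the assumption the abstract questions.
P(X_{ni}=1 \mid \theta_n, g) = \frac{\exp(\theta_n - b_{ig})}{1 + \exp(\theta_n - b_{ig})},
\qquad
P(\mathbf{x}_n) = \sum_{g=1}^{G} \pi_g \int \prod_i P(x_{ni} \mid \theta, g)\, f_g(\theta)\, d\theta
```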
Peer reviewed
Goegan, Lauren D.; Harrison, Gina L. – Learning Disabilities: A Contemporary Journal, 2017
The effects of extended time on the writing performance of university students with learning disabilities (LD) were examined. Thirty-eight students (19 LD; 19 non-LD) completed a collection of cognitive, linguistic, and literacy measures, and wrote essays under regular and extended time conditions. Limited evidence was found to support the…
Descriptors: Foreign Countries, Undergraduate Students, Testing Accommodations, Learning Disabilities
Peer reviewed
Wyse, Adam E.; Hao, Shiqi – Applied Psychological Measurement, 2012
This article introduces two new classification consistency indices that can be used when item response theory (IRT) models have been applied. The new indices are shown to be related to Rudner's classification accuracy index and Guo's classification accuracy index. The Rudner- and Guo-based classification accuracy and consistency indices are…
Descriptors: Item Response Theory, Classification, Accuracy, Reliability
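Rudner's index, which the abstract names as the point of comparison, has a compact form: treat each examinee's true ability as normal around the estimate with its standard error, and average the probability that the true ability falls on the same side of the cut score as the estimate. A sketch following that standard normal-approximation definition, with toy values:

```python
# Rudner-style classification accuracy index; theta estimates, standard
# errors, and the cut score below are toy values, not data from the article.
import numpy as np
from scipy.stats import norm

def rudner_accuracy(theta_hat, se, cut):
    theta_hat, se = np.asarray(theta_hat, float), np.asarray(se, float)
    p_master = 1 - norm.cdf(cut, loc=theta_hat, scale=se)  # P(true theta >= cut)
    # Probability the true ability agrees with the observed classification.
    p_correct = np.where(theta_hat >= cut, p_master, 1 - p_master)
    return p_correct.mean()

print(rudner_accuracy(theta_hat=[-1.2, 0.3, 1.5], se=[0.30, 0.25, 0.35], cut=0.0))
```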
Peer reviewed
Owen, Steven V.; Froman, Robin D. – Educational and Psychological Measurement, 1987
To test further for the efficacy of three-option achievement items, parallel three- and five-option item tests were distributed randomly to college students. Results showed no differences in mean item difficulty, mean discrimination, or total test score, but a substantial reduction in time spent on three-option items. (Author/BS)
Descriptors: Achievement Tests, Higher Education, Multiple Choice Tests, Test Format
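The quantities being compared here are classical item statistics: difficulty as the proportion correct, and discrimination as the corrected item-total correlation. A minimal sketch with a fabricated response matrix (the study's data are not reproduced here):

```python
# Classical item analysis: difficulty = proportion correct, discrimination =
# correlation of each item with the total score excluding that item.
import numpy as np

def item_stats(X):
    """X: examinees x items matrix of 0/1 scores."""
    difficulty = X.mean(axis=0)
    total = X.sum(axis=1)
    discrimination = np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                               for j in range(X.shape[1])])
    return difficulty, discrimination

rng = np.random.default_rng(1)
X = (rng.random((200, 10)) < 0.7).astype(int)  # toy 200-examinee, 10-item data
diff, disc = item_stats(X)
print(diff.round(2), disc.round(2))
```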
Reckase, Mark D. – 1979
Because latent trait models require that large numbers of items be calibrated or that testing of the same large group be repeated, item parameter estimates are often obtained by administering separate tests to different groups and "linking" the results to construct an adequate item pool. Four issues were studied, based upon the analysis…
Descriptors: Achievement Tests, High Schools, Item Banks, Mathematical Models
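One standard way to do the "linking" the abstract describes (shown as an illustration; the paper may use a different procedure) is mean/sigma linking: from items common to two separately calibrated forms, find the linear transformation that puts one set of difficulty estimates on the other's scale.

```python
# Mean/sigma linking: solve for A, B so that A * b_y + B matches the scale
# of b_x, using difficulties of common items estimated in two calibrations.
import numpy as np

def mean_sigma_link(b_x, b_y):
    b_x, b_y = np.asarray(b_x, float), np.asarray(b_y, float)
    A = b_x.std() / b_y.std()
    B = b_x.mean() - A * b_y.mean()
    return A, B

# Toy common-item difficulty estimates from two groups.
A, B = mean_sigma_link(b_x=[-0.8, -0.1, 0.5, 1.2], b_y=[-1.5, -0.7, 0.0, 0.8])
print(f"A = {A:.3f}, B = {B:.3f}")
```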
Scheetz, James P.; Forsyth, Robert A. – 1977
Empirical evidence is presented on the effects of using stratified sampling of items in multiple matrix sampling on the accuracy of estimates of the population mean. Data were obtained from a sample of 600 high school students for a 36-item mathematics test and a 40-item vocabulary test, both subtests of the Iowa Tests of Educational…
Descriptors: Achievement Tests, Difficulty Level, Item Analysis, Item Sampling
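Multiple matrix sampling gives each examinee only a subset of items and recovers population-level statistics from the per-item results. The sketch below uses simple random assignment of items to forms (the paper's question is whether stratified item sampling improves on this); all numbers are invented, though the 36-item size echoes the mathematics subtest.

```python
# Multiple matrix sampling in miniature: estimate the population mean total
# score from per-item proportions, each item seen by only one form's takers.
import numpy as np

rng = np.random.default_rng(2)
n_items, n_students, n_forms = 36, 600, 4
p_true = rng.uniform(0.3, 0.9, n_items)     # true per-item proportions correct

item_form = rng.permutation(n_items) % n_forms        # items -> forms
student_form = rng.integers(0, n_forms, n_students)   # students -> forms

est = np.zeros(n_items)
for f in range(n_forms):
    items = np.where(item_form == f)[0]
    takers = int((student_form == f).sum())
    responses = rng.random((takers, len(items))) < p_true[items]
    est[items] = responses.mean(axis=0)

print(f"true mean total = {p_true.sum():.1f}, estimated = {est.sum():.1f}")
```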
Jolly, S. Jean; And Others – 1985
Scores from the Stanford Achievement Tests administered to 50,000 students in Palm Beach County, Florida, were studied in order to determine whether the speeded nature of the reading comprehension subtest was related to inconsistencies in the score profiles. Specifically, the probable effect of random guessing was examined. Reading scores were…
Descriptors: Achievement Tests, Elementary Secondary Education, Guessing (Tests), Item Analysis
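The arithmetic behind the guessing question is simple: a student who answers unreached items at random on a k-option test gains about n/k raw points by chance, which can distort score profiles on a speeded subtest. A one-line illustration with invented numbers:

```python
# Expected raw-score gain from random guessing: invented numbers, not the
# Stanford Achievement Test's actual option or item counts.
n_guessed, k = 12, 4                 # 12 unreached items, 4 options each
print(n_guessed / k)                 # about 3 raw points expected by chance
```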
Olsen, James B.; And Others – 1986
Student achievement test scores were compared and equated, using three different testing methods: paper-administered, computer-administered, and computerized adaptive testing. The tests were developed from third and sixth grade mathematics item banks of the California Assessment Program. The paper and the computer-administered tests were identical…
Descriptors: Achievement Tests, Adaptive Testing, Comparative Testing, Computer Assisted Testing
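Score equating across administration modes can be illustrated with a bare-bones equipercentile table, which maps each score on one form to the score with the same percentile rank on the other. This simplified version (no smoothing, invented score distributions) is not necessarily the method the study applied:

```python
# Equipercentile equating in miniature: map each paper-test score to the
# computer-test score at the same percentile rank. Toy distributions only.
import numpy as np

def equipercentile(scores_from, scores_to):
    grid = np.arange(scores_from.min(), scores_from.max() + 1)
    pr = np.array([np.mean(scores_from <= g) for g in grid])  # percentile ranks
    return dict(zip(grid.tolist(), np.round(np.quantile(scores_to, pr), 1)))

rng = np.random.default_rng(3)
paper = rng.binomial(40, 0.60, 1000)     # toy number-correct scores, form A
computer = rng.binomial(40, 0.55, 1000)  # toy number-correct scores, form B
print(equipercentile(paper, computer))
```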
Maurelli, Vincent A.; Weiss, David J. – 1981
A Monte Carlo simulation was conducted to assess the effects of varying subtest order, subtest termination criterion, and variable versus fixed entry in an adaptive testing strategy for test batteries on the psychometric properties of an existing achievement test battery. Comparisons were made among conventionally administered tests and adaptive…
Descriptors: Achievement Tests, Adaptive Testing, Computer Assisted Testing, Latent Trait Theory
Kingsbury, G. Gage; Weiss, David J. – 1981
Conventional mastery tests designed to make optimal mastery classifications were compared with fixed-length and variable-length adaptive mastery tests. Comparisons between the testing procedures were made across five content areas in an introductory biology course from tests administered to volunteers. The criterion was the student's standing in…
Descriptors: Achievement Tests, Adaptive Testing, Biology, Comparative Analysis
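A common variable-length mastery rule, shown here as an illustration rather than the paper's exact procedure, keeps testing until the confidence interval around the ability estimate clears the mastery cut score:

```python
# Confidence-interval stopping rule for a variable-length mastery test.
# The trajectory of estimates and standard errors below is invented.
def mastery_decision(theta_hats, ses, cut=0.0, z=1.96):
    """Classify as soon as the z-interval around the estimate clears the cut."""
    n = 0
    for n, (t, s) in enumerate(zip(theta_hats, ses), start=1):
        if t - z * s > cut:
            return "master", n
        if t + z * s < cut:
            return "nonmaster", n
    return "undecided", n

# Estimate stabilizes above the cut as the standard error shrinks.
print(mastery_decision([0.40, 0.50, 0.55, 0.60, 0.62],
                       [0.90, 0.60, 0.45, 0.35, 0.28]))
```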
Brown, Joel M.; Weiss, David J. – 1977
An adaptive testing strategy is described for achievement tests covering multiple content areas. The strategy combines adaptive item selection both within and between the subtests in the multiple-subtest battery. A real-data simulation was conducted to compare the results from adaptive testing and from conventional testing, in terms of test…
Descriptors: Achievement Tests, Adaptive Testing, Branching, Comparative Analysis
Wilcox, Rand R. – 1979
Mastery tests are analyzed in terms of the number of skills to be mastered and the number of items per skill, so that correct decisions of mastery or nonmastery are made with a desired probability. It is assumed that a random sample of skills will be selected for measurement, that each skill will be measured by the same number of…
Descriptors: Achievement Tests, Cutting Scores, Decision Making, Equivalency Tests
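The calculation the abstract describes can be sketched in its simplest binomial form: with n items per skill and a passing cut of c correct, the probability of a correct mastery call is a binomial tail. Parameter values below are illustrative, not Wilcox's:

```python
# Probability of a correct mastery/nonmastery decision under a binomial model.
from scipy.stats import binom

def p_correct_decision(p_true, n_items, cut, master):
    """P(decide correctly) given the examinee's true proportion-correct."""
    p_pass = 1 - binom.cdf(cut - 1, n_items, p_true)  # P(score >= cut)
    return p_pass if master else 1 - p_pass

print(p_correct_decision(0.85, n_items=8, cut=6, master=True))   # true master
print(p_correct_decision(0.50, n_items=8, cut=6, master=False))  # true nonmaster
```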