Showing 46 to 60 of 624 results
Derek Sauder – ProQuest LLC, 2020
The Rasch model is commonly used to calibrate multiple choice items. However, the sample sizes needed to estimate the Rasch model can be difficult to attain (e.g., consider a small testing company trying to pretest new items). With small sample sizes, auxiliary information besides the item responses may improve estimation of the item parameters.…
Descriptors: Item Response Theory, Sample Size, Computation, Test Length
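As a hedged illustration of the setting described in this abstract (not the author's method), the sketch below shows the Rasch item response function and how a simple normal prior on item difficulty — one way of bringing in auxiliary information — can stabilize a small-sample estimate. The prior, abilities, and pretest data are all hypothetical.

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch probability of a correct response given ability theta and difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def map_difficulty(responses, thetas, prior_mean=0.0, prior_sd=1.0):
    """MAP estimate of one item's difficulty with a normal prior (the 'auxiliary
    information'); abilities are treated as known for simplicity."""
    grid = np.linspace(-4, 4, 801)
    log_post = []
    for b in grid:
        p = rasch_p(thetas, b)
        loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
        log_prior = -0.5 * ((b - prior_mean) / prior_sd) ** 2
        log_post.append(loglik + log_prior)
    return grid[int(np.argmax(log_post))]

# Hypothetical small pretest sample: 30 examinees, known abilities, one new item.
rng = np.random.default_rng(1)
thetas = rng.normal(0, 1, size=30)
true_b = 0.8
responses = (rng.random(30) < rasch_p(thetas, true_b)).astype(int)
print("MAP difficulty estimate:", map_difficulty(responses, thetas))
```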
Peer reviewed
Direct link
Ellis, Jules L. – Educational and Psychological Measurement, 2021
This study develops a theoretical model for the costs of an exam as a function of its duration. Two kinds of costs are distinguished: (1) the costs of measurement errors and (2) the costs of the measurement itself. Both costs are expressed in student time. Based on a classical test theory model, enriched with assumptions on the context, the costs…
Descriptors: Test Length, Models, Error of Measurement, Measurement
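A minimal numerical sketch of the kind of duration trade-off this abstract describes, under assumptions of my own (Spearman-Brown reliability growth with duration, a linear cost of testing time, and an arbitrary weight on error costs); it is not the model developed in the article.

```python
import numpy as np

def total_cost(duration, rho_1=0.15, error_cost_weight=40.0):
    """Illustrative total cost (in hours of student time) as a function of exam duration.

    Assumptions (not from the article): reliability follows the Spearman-Brown
    formula with rho_1 the reliability of a 1-hour exam, the cost of measurement
    error is proportional to the error variance (1 - reliability), and the cost
    of the measurement is the exam duration itself.
    """
    reliability = duration * rho_1 / (1 + (duration - 1) * rho_1)
    return error_cost_weight * (1 - reliability) + duration

durations = np.linspace(0.5, 6, 111)
costs = np.array([total_cost(d) for d in durations])
best = durations[np.argmin(costs)]
print(f"Cost-minimizing duration under these assumptions: {best:.2f} hours")
```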
Peer reviewed
PDF on ERIC Download full text
Baris Pekmezci, Fulya; Sengul Avsar, Asiye – International Journal of Assessment Tools in Education, 2021
There is a great deal of research about item response theory (IRT) conducted by simulations. Item and ability parameters are estimated with varying numbers of replications under different test conditions. However, it is not clear what the appropriate number of replications should be. The aim of the current study is to develop guidelines for the…
Descriptors: Item Response Theory, Computation, Accuracy, Monte Carlo Methods
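A small illustration of why the number of replications matters: the Monte Carlo standard error of a simulated quantity shrinks with the square root of the number of replications. The "true" parameter value and estimator SD below are hypothetical and unrelated to the guidelines developed in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 1.0          # hypothetical true item parameter
est_sd = 0.25             # hypothetical sampling SD of the estimator per replication

for n_reps in (25, 100, 400, 1600):
    # Each "replication" yields one parameter estimate; the MC standard error of
    # the mean estimate is sd / sqrt(R), so quadrupling R halves it.
    estimates = rng.normal(true_value, est_sd, size=n_reps)
    mc_se = estimates.std(ddof=1) / np.sqrt(n_reps)
    print(f"R = {n_reps:5d}  mean estimate = {estimates.mean():.4f}  MC SE = {mc_se:.4f}")
```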
Peer reviewed
Direct link
Braun, Virginia; Clarke, Victoria; Boulton, Elicia; Davey, Louise; McEvoy, Charlotte – International Journal of Social Research Methodology, 2021
Fully "qualitative" surveys, which prioritise qualitative research values, and harness the rich potential of qualitative data, have much to offer qualitative researchers, especially given online delivery options. Yet the method remains underutilised, and there is little in the way of methodological discussion of qualitative surveys.…
Descriptors: Online Surveys, Qualitative Research, Social Science Research, Disclosure
Peer reviewed
Direct link
Jingwen Wang; Ying Zheng; Yi Zou – Language Testing in Asia, 2024
Pearson Test of English Academic (PTE Academic), a high-stakes English language proficiency test, underwent substantial revisions in 2021. The test duration was reduced from 3 hours to 2 hours by reducing the number of tasks and sections. This study investigates the impact of these changes on teachers' perceptions and teaching practices, areas…
Descriptors: Foreign Countries, High Stakes Tests, Language Proficiency, Language Tests
Peer reviewed
Direct link
Wyse, Adam E.; McBride, James R. – Measurement: Interdisciplinary Research and Perspectives, 2022
A common practical challenge is how to assign ability estimates to all-incorrect and all-correct response patterns when using item response theory (IRT) models and maximum likelihood estimation (MLE), since ability estimates for these types of responses equal −∞ or +∞. This article uses a simulation study and data from an operational K-12…
Descriptors: Scores, Adaptive Testing, Computer Assisted Testing, Test Length
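The sketch below shows why the ML ability estimate diverges for an all-correct pattern under a Rasch-type model: the log-likelihood keeps increasing as theta grows, so it has no finite maximum. The truncation at ±4 logits mentioned in the comment is a common practical convention, not necessarily the treatment examined in the article, and the item difficulties are hypothetical.

```python
import numpy as np

def loglik_all_correct(theta, b):
    """Log-likelihood of answering every item correctly under a Rasch model
    with item difficulties b (abilities in logits)."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return np.sum(np.log(p))

b = np.array([-1.0, -0.3, 0.2, 0.9, 1.5])   # hypothetical item difficulties
for theta in (0.0, 2.0, 4.0, 8.0, 16.0):
    # The log-likelihood is strictly increasing in theta, so the MLE is +infinity;
    # in practice programs often truncate the estimate (e.g., at +/-4 logits) or
    # switch to a Bayesian estimator for these patterns.
    print(f"theta = {theta:5.1f}  log-likelihood = {loglik_all_correct(theta, b):8.4f}")
```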
Peer reviewed
Direct link
Wang, Shaojie; Zhang, Minqiang; Lee, Won-Chan; Huang, Feifei; Li, Zonglong; Li, Yixing; Yu, Sufang – Journal of Educational Measurement, 2022
Traditional IRT characteristic curve linking methods ignore parameter estimation errors, which may undermine the accuracy of estimated linking constants. Two new linking methods are proposed that take into account parameter estimation errors. The item- (IWCC) and test-information-weighted characteristic curve (TWCC) methods employ weighting…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Monte Carlo Methods
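For context, characteristic curve linking finds slope and intercept constants (A, B) that align item characteristic curves across two calibrations. Below is a hedged, simplified sketch of a Haebara-style criterion with a weight per item and theta point, to convey the idea of information weighting; it is not the IWCC/TWCC procedure itself, and the 2PL parameters and weight choice are hypothetical.

```python
import numpy as np

def icc(theta, a, b):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def weighted_criterion(A, B, a_new, b_new, a_old, b_old, theta, weights):
    """Haebara-style loss: weighted squared ICC differences after putting the
    new-form parameters on the old scale (a/A, A*b + B)."""
    total = 0.0
    for j in range(len(a_new)):
        diff = icc(theta, a_old[j], b_old[j]) - icc(theta, a_new[j] / A, A * b_new[j] + B)
        total += np.sum(weights[j] * diff ** 2)
    return total

# Hypothetical common-item parameters from two calibrations.
a_old = np.array([1.1, 0.8, 1.4]); b_old = np.array([-0.5, 0.3, 1.0])
a_new = np.array([1.0, 0.7, 1.3]); b_new = np.array([-0.9, -0.1, 0.6])
theta = np.linspace(-4, 4, 81)
# Information-style weights: p(1-p) of the old-form ICCs (illustrative choice).
weights = [icc(theta, a, b) * (1 - icc(theta, a, b)) for a, b in zip(a_old, b_old)]

grid_A = np.linspace(0.7, 1.3, 61); grid_B = np.linspace(-1.0, 1.0, 81)
best = min(((weighted_criterion(A, B, a_new, b_new, a_old, b_old, theta, weights), A, B)
            for A in grid_A for B in grid_B))
print(f"criterion = {best[0]:.4f}  A = {best[1]:.3f}  B = {best[2]:.3f}")
```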
Peer reviewed
Direct link
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of a correct response p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
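As general background (not the closed formula derived in the article), the standard error of a Rasch-type item difficulty estimate is roughly 1/sqrt(n·p(1−p)) for n responses given at probability p of success; at p = 1/2 the per-response information p(1−p) reaches its maximum of 1/4, which is why the medium-difficulty (adaptive) case is the most informative. The sample sizes below are arbitrary.

```python
import numpy as np

def se_difficulty(n_respondents, p_correct):
    """Approximate standard error (in logits) of a Rasch item-difficulty estimate:
    the Fisher information per response is p(1-p), so SE ~ 1/sqrt(n * p * (1-p))."""
    info = n_respondents * p_correct * (1 - p_correct)
    return 1.0 / np.sqrt(info)

for p in (0.5, 0.7, 0.9):
    print(f"p = {p:.1f}  SE with n = 400: {se_difficulty(400, p):.3f} logits")
# p = 0.5 gives the smallest SE: 2/sqrt(n), i.e. 0.100 logits at n = 400.
```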
Peer reviewed
PDF on ERIC Download full text
Kiliç, Abdullah Faruk; Uysal, Ibrahim – Turkish Journal of Education, 2019
In this study, the purpose is to compare factor retention methods under simulation conditions. For this purpose, simulation conditions with number of factors (1, 2 [simple]), sample sizes (250, 1,000, and 3,000), number of items (20, 30), average factor loading (0.50, 0.70), and correlation matrix type (Pearson Product Moment [PPM] and tetrachoric)…
Descriptors: Simulation, Factor Structure, Sample Size, Test Length
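One widely used factor-retention method that simulation studies of this kind compare is Horn's parallel analysis; a minimal numpy-only sketch using Pearson correlations is shown below (the tetrachoric variant would require estimating tetrachoric correlations first). The two-factor data here are randomly generated for illustration only.

```python
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    """Retain factors whose observed correlation-matrix eigenvalues exceed the
    mean eigenvalues obtained from random data of the same shape (Horn, 1965)."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    obs_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sim_eigs = np.zeros((n_sims, k))
    for s in range(n_sims):
        random_data = rng.normal(size=(n, k))
        sim_eigs[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(random_data, rowvar=False)))[::-1]
    return int(np.sum(obs_eigs > sim_eigs.mean(axis=0)))

# Illustrative two-factor data: 250 respondents, 20 items.
rng = np.random.default_rng(1)
factors = rng.normal(size=(250, 2))
loadings = np.vstack([np.tile([0.6, 0.0], (10, 1)), np.tile([0.0, 0.6], (10, 1))])
data = factors @ loadings.T + rng.normal(scale=0.8, size=(250, 20))
print("Factors retained:", parallel_analysis(data))
```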
Peer reviewed
Direct link
Saskia van Laar; Johan Braeken – International Journal of Testing, 2024
This study examined the impact of two questionnaire characteristics, scale position and questionnaire length, on the prevalence of random responders in the TIMSS 2015 eighth-grade student questionnaire. While there was no support for an absolute effect of questionnaire length, we did find a positive effect for scale position, with an increase of…
Descriptors: Middle School Students, Grade 8, Questionnaires, Test Length
Peer reviewed
PDF on ERIC Download full text
Sari, Halil Ibrahim – International Journal of Psychology and Educational Studies, 2020
Due to their low cost, Monte Carlo (MC) simulations have been extensively conducted in the area of educational measurement. However, the results derived from MC studies may not always be generalizable to operational studies. The purpose of this study was to provide a methodological discussion of the other types of simulation methods, and run…
Descriptors: Computer Assisted Testing, Adaptive Testing, Simulation, Test Length
Dong, Yixiao; Clements, Douglas H.; Day-Hess, Crystal A.; Sarama, Julie; Dumas, Denis – Journal of Psychoeducational Assessment, 2021
Psychometric work with young children faces the particular challenge that children's attention spans are relatively short, and therefore, shorter assessments are required while retaining comprehensive coverage. This article reports on three empirical studies that encompass the development and validation of the research-based early mathematics…
Descriptors: Young Children, Numeracy, Test Construction, Test Validity
Peer reviewed
PDF on ERIC Download full text
Uysal, Ibrahim; Sahin-Kürsad, Merve; Kiliç, Abdullah Faruk – Participatory Educational Research, 2022
The aim of the study was to examine whether the common items in the mixed format (e.g., multiple-choice and essay items) contain parameter drift in the test equating processes performed with the common-item nonequivalent groups design. In this study, which was carried out using Monte Carlo simulation with a fully crossed design, the factors of test…
Descriptors: Test Items, Test Format, Item Response Theory, Equated Scores
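For context, one simple way drift in common items is often screened in practice (not necessarily the detection approach used in this simulation) is to compare each common item's difficulty estimates across the two administrations after a rough scale alignment and flag large standardized differences. The parameter estimates and standard errors below are hypothetical.

```python
import numpy as np

# Hypothetical difficulty estimates for five common items on the old and new forms,
# with their estimation standard errors.
b_old = np.array([-0.8, -0.2, 0.1, 0.6, 1.2]); se_old = np.full(5, 0.10)
b_new = np.array([-0.7, -0.1, 0.2, 0.7, 2.0]); se_new = np.full(5, 0.10)

# Rough mean alignment, then a z-like drift statistic per item.
shift = (b_old - b_new).mean()
z = (b_new + shift - b_old) / np.sqrt(se_old**2 + se_new**2)
for i, zi in enumerate(z):
    flag = "DRIFT?" if abs(zi) > 3 else "ok"
    print(f"item {i + 1}: z = {zi:+.2f}  {flag}")
```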
Peer reviewed
Direct link
Sauder, Derek; DeMars, Christine – Applied Measurement in Education, 2020
We used simulation techniques to assess the item-level and familywise Type I error control and power of an IRT item-fit statistic, the S-X². Previous research indicated that the S-X² has good Type I error control and decent power, but no previous research examined familywise Type I error control.…
Descriptors: Item Response Theory, Test Items, Sample Size, Test Length
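For context, the familywise Type I error rate across m independent item-level tests run at level alpha is 1 − (1 − alpha)^m, which is what motivates corrections such as Bonferroni; a quick numeric illustration follows (the test lengths are arbitrary).

```python
# Familywise Type I error rate when m independent item-fit tests are each run at
# level alpha, and the Bonferroni-adjusted per-test level that caps it at alpha.
alpha = 0.05
for m in (10, 40, 80):
    familywise = 1 - (1 - alpha) ** m
    print(f"m = {m:3d} items: familywise rate = {familywise:.3f}, "
          f"Bonferroni per-test alpha = {alpha / m:.5f}")
```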
Peer reviewed
Direct link
Raborn, Anthony W.; Leite, Walter L.; Marcoulides, Katerina M. – Educational and Psychological Measurement, 2020
This study compares automated methods to develop short forms of psychometric scales. Obtaining a short form that has both adequate internal structure and strong validity with respect to relationships with other variables is difficult with traditional methods of short-form development. Metaheuristic algorithms can select items for short forms while…
Descriptors: Test Construction, Automation, Heuristics, Mathematics
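As a hedged sketch of the general idea (not the specific metaheuristics compared in the article), the snippet below uses a simple hill-climbing swap search to pick a fixed-length short form whose total score correlates highly with the full-scale total; the simulated item data are hypothetical.

```python
import numpy as np

def shortform_hillclimb(responses, k, n_iter=500, seed=0):
    """Greedy swap search: keep a k-item subset, repeatedly try swapping one kept
    item for one dropped item, and accept the swap if the correlation between
    the short-form total and the full-scale total improves."""
    rng = np.random.default_rng(seed)
    n_items = responses.shape[1]
    full_total = responses.sum(axis=1)
    subset = list(rng.choice(n_items, size=k, replace=False))
    def score(items):
        return np.corrcoef(responses[:, items].sum(axis=1), full_total)[0, 1]
    best = score(subset)
    for _ in range(n_iter):
        out_item = rng.choice(subset)
        in_item = rng.choice([j for j in range(n_items) if j not in subset])
        candidate = [j for j in subset if j != out_item] + [in_item]
        if (s := score(candidate)) > best:
            subset, best = candidate, s
    return sorted(subset), best

# Hypothetical data: 300 respondents x 30 dichotomous items driven by one latent trait.
rng = np.random.default_rng(2)
trait = rng.normal(size=(300, 1))
items = (trait + rng.normal(scale=1.2, size=(300, 30)) > 0).astype(int)
subset, r = shortform_hillclimb(items, k=10)
print(f"Selected items: {subset}  correlation with full scale: {r:.3f}")
```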