Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 15 |
| Since 2022 (last 5 years) | 63 |
| Since 2017 (last 10 years) | 162 |
| Since 2007 (last 20 years) | 321 |
Author
| Author | Count |
| --- | --- |
| Hambleton, Ronald K. | 15 |
| Wang, Wen-Chung | 9 |
| Livingston, Samuel A. | 6 |
| Sijtsma, Klaas | 6 |
| Wainer, Howard | 6 |
| Weiss, David J. | 6 |
| Wilcox, Rand R. | 6 |
| Cheng, Ying | 5 |
| Gessaroli, Marc E. | 5 |
| Lee, Won-Chan | 5 |
| Lewis, Charles | 5 |
Location
| Location | Count |
| --- | --- |
| Turkey | 8 |
| Australia | 7 |
| Canada | 7 |
| China | 5 |
| Netherlands | 5 |
| Japan | 4 |
| Taiwan | 4 |
| United Kingdom | 4 |
| Germany | 3 |
| Michigan | 3 |
| Singapore | 3 |
Laws, Policies, & Programs
| Law/Program | Count |
| --- | --- |
| Americans with Disabilities… | 1 |
| Equal Access | 1 |
| Job Training Partnership Act… | 1 |
| Race to the Top | 1 |
| Rehabilitation Act 1973… | 1 |
Tom Benton – Research Matters, 2024
Educational assessment is used throughout the world for a range of different formative and summative purposes. Wherever an assessment is developed, whether by a teacher creating a quiz for their class, or by a testing company creating a high-stakes assessment, it is necessary to decide how long the test should be. Specifically, how many questions…
Descriptors: Foreign Countries, High Stakes Tests, Test Length, Test Construction
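The classical starting point for the test-length question this article raises is the Spearman-Brown prophecy formula, which predicts how reliability changes as a test is lengthened with parallel items. A minimal sketch (illustrative of the classical reasoning, not Benton's method):

```python
def spearman_brown_length_factor(current_rel, target_rel):
    """Factor by which a test must be lengthened (with parallel items)
    to raise its reliability from current_rel to target_rel,
    per the Spearman-Brown prophecy formula."""
    return (target_rel * (1 - current_rel)) / (current_rel * (1 - target_rel))

# A 40-item test with reliability .70 would need roughly
# 40 * 3.86 = 154 items to reach reliability .90.
print(round(spearman_brown_length_factor(0.70, 0.90), 2))  # 3.86
```

The formula makes explicit why chasing small reliability gains can demand disproportionately many extra questions.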
Yi-Jui I. Chen; Yi-Jhen Wu; Yi-Hsin Chen; Robin Irey – Journal of Psychoeducational Assessment, 2025
A short form of the 60-item computer-based orthographic processing assessment (long-form COPA or COPA-LF) was developed. The COPA-LF consists of five skills, including rapid perception, access, differentiation, correction, and arrangement. Thirty items from the COPA-LF were selected for the short-form COPA (COPA-SF) based on cognitive diagnostic…
Descriptors: Computer Assisted Testing, Test Length, Test Validity, Orthographic Symbols
Fellinghauer, Carolina; Debelak, Rudolf; Strobl, Carolin – Educational and Psychological Measurement, 2023
This simulation study investigated to what extent departures from construct similarity as well as differences in the difficulty and targeting of scales impact the score transformation when scales are equated by means of concurrent calibration using the partial credit model with a common person design. Practical implications of the simulation…
Descriptors: True Scores, Equated Scores, Test Items, Sample Size
Edwards, Ashley A.; Joyner, Keanan J.; Schatschneider, Christopher – Educational and Psychological Measurement, 2021
The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (Cronbach's alpha, omega, omega hierarchical, Revelle's omega, and greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors with varying…
Descriptors: Reliability, Computation, Accuracy, Sample Size
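One of the estimators compared here, Cronbach's alpha, is easy to reproduce on simulated unidimensional continuous data with uncorrelated errors. A sketch of that general setup (the sample size, item count, and loading are illustrative, not the authors' 140-condition design):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, loading = 500, 10, 0.7
theta = rng.normal(size=(n, 1))                              # latent trait
errors = rng.normal(scale=np.sqrt(1 - loading**2), size=(n, k))  # uncorrelated errors
X = loading * theta + errors                                 # unidimensional items

def cronbach_alpha(X):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances
    over the variance of the total score)."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

print(round(cronbach_alpha(X), 3))
```

With these equal loadings the items are essentially tau-equivalent, the condition under which alpha matches the population reliability (about .91 here) rather than underestimating it.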
Rios, Joseph A.; Miranda, Alejandra A. – Educational Measurement: Issues and Practice, 2021
Subscore added value analyses assume invariance across test taking populations; however, this assumption may be untenable in practice as differential subdomain relationships may be present among subgroups. The purpose of this simulation study was to understand the conditions associated with subscore added value noninvariance when manipulating: (1)…
Descriptors: Scores, Test Length, Ability, Correlation
Lang, Joseph B. – Journal of Educational and Behavioral Statistics, 2023
This article is concerned with the statistical detection of copying on multiple-choice exams. As an alternative to existing permutation- and model-based copy-detection approaches, a simple randomization p-value (RP) test is proposed. The RP test, which is based on an intuitive match-score statistic, makes no assumptions about the distribution of…
Descriptors: Identification, Cheating, Multiple Choice Tests, Item Response Theory
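The intuition behind a randomization test on a match-score statistic can be sketched as follows; this is a generic Monte Carlo version (the roster data and resampling scheme are illustrative), not Lang's exact RP test:

```python
import numpy as np

def match_score_pvalue(source, suspect, roster, n_draws=2000, seed=0):
    """Monte Carlo p-value for the number of identical answers between a
    suspected source and copier, referenced against answers drawn
    independently from each item's empirical response distribution
    in the class (a generic sketch, not Lang's exact RP test)."""
    rng = np.random.default_rng(seed)
    source, suspect = np.asarray(source), np.asarray(suspect)
    observed = int((suspect == source).sum())
    exceed = 0
    for _ in range(n_draws):
        simulated = np.array([rng.choice(roster[:, j])
                              for j in range(roster.shape[1])])
        exceed += int((simulated == source).sum()) >= observed
    return (exceed + 1) / (n_draws + 1)

rng = np.random.default_rng(1)
roster = rng.integers(0, 4, size=(30, 20))   # 30 examinees, 20 MC items
source = roster[0]
suspect = source.copy()
suspect[:2] = (suspect[:2] + 1) % 4          # copier matches on 18 of 20 items
print(match_score_pvalue(source, suspect, roster))
```

Because chance agreement on four-option items averages only a handful of matches, 18 matches out of 20 yields a p-value near the 1/(n_draws + 1) floor.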
Guo, Wenjing; Choi, Youn-Jeng – Educational and Psychological Measurement, 2023
Determining the number of dimensions is extremely important in applying item response theory (IRT) models to data. Traditional and revised parallel analyses have been proposed within the factor analysis framework, and both have shown some promise in assessing dimensionality. However, their performance in the IRT framework has not been…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Guidelines
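Classical (Horn's) parallel analysis, the baseline that the revised variants build on, retains dimensions whose observed eigenvalues exceed those of random data. A minimal factor-analytic sketch under a one-factor simulation (the article's revised and IRT-based variants differ in detail):

```python
import numpy as np

def parallel_analysis(X, n_sims=200, q=0.95, seed=0):
    """Count dimensions whose observed correlation-matrix eigenvalue
    exceeds the q-quantile of eigenvalues from random normal data of
    the same shape (Horn's classical parallel analysis)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]  # descending
    sims = np.empty((n_sims, k))
    for s in range(n_sims):
        Z = rng.normal(size=(n, k))
        sims[s] = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
    return int(np.sum(obs > np.quantile(sims, q, axis=0)))

rng = np.random.default_rng(1)
theta = rng.normal(size=(500, 1))
X = 0.7 * theta + rng.normal(scale=np.sqrt(0.51), size=(500, 10))
print(parallel_analysis(X))  # 1 dimension for one-factor data
```

The first observed eigenvalue (roughly 5.4 here) clears the random-data threshold easily, while the remaining ones fall well below it.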
Novak, Josip; Rebernjak, Blaž – Measurement: Interdisciplinary Research and Perspectives, 2023
A Monte Carlo simulation study was conducted to examine the performance of α, λ_2, λ_4, λ_2, ω_T, GLB_MRFA, and GLB_Algebraic coefficients. Population reliability, distribution shape, sample size, test length, and number of response categories were varied…
Descriptors: Monte Carlo Methods, Evaluation Methods, Reliability, Simulation
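Of the coefficients compared, Guttman's λ_2 is straightforward to compute from its textbook definition; a sketch on simulated one-factor data (the data-generating values are illustrative, not the study's design):

```python
import numpy as np

def guttman_lambda2(X):
    """Guttman's lambda-2: (sum of off-diagonal covariances
    + sqrt(k/(k-1) * sum of squared off-diagonal covariances))
    divided by the total-score variance."""
    C = np.cov(X, rowvar=False)
    k = C.shape[0]
    off = C[~np.eye(k, dtype=bool)]
    return (off.sum() + np.sqrt(k / (k - 1) * (off**2).sum())) / C.sum()

rng = np.random.default_rng(0)
theta = rng.normal(size=(500, 1))
X = 0.7 * theta + rng.normal(scale=np.sqrt(0.51), size=(500, 10))
print(round(guttman_lambda2(X), 3))
```

λ_2 never falls below alpha, which is one reason simulation studies like this one keep it in the comparison set.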
Hakyung Sung; Sooyeon Cho; Kristopher Kyle – Language Assessment Quarterly, 2024
Lexical diversity (LD) is an important indicator of second language lexical development. Much research has investigated LD indices, with a focus on learners of English. However, further research is needed in languages that are typologically distinct from English, such as Korean. In this study, we evaluated the reliability and validity of LD…
Descriptors: Second Language Learning, Korean, Persuasive Discourse, Language Tests
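Many LD indices are variants of the type-token ratio; the moving-average TTR (MATTR) is one common length-robust example (shown here for illustration; the article evaluates a broader set of indices for Korean):

```python
def mattr(tokens, window=50):
    """Moving-average type-token ratio: the mean of types/window over
    every sliding window of fixed size, which reduces the raw
    type-token ratio's sensitivity to text length."""
    if len(tokens) < window:
        return len(set(tokens)) / len(tokens)
    ttrs = [len(set(tokens[i:i + window])) / window
            for i in range(len(tokens) - window + 1)]
    return sum(ttrs) / len(ttrs)

print(mattr(["the", "cat", "sat", "the", "mat"]))  # 0.8 (plain TTR: text shorter than window)
```

Because every window has the same length, MATTR values from texts of different lengths remain comparable, which is the main failure mode of the raw TTR.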
Niclas Larson – Journal of the International Society for Teacher Education, 2024
This paper reports on a revision of the assessment model from the first mathematics course for pre-service teachers (PSTs) aiming to teach grades 5-10 at a Norwegian university. The weight of the final written exam was reduced and a new, mastery-based testing model, with weekly small tests, was introduced. Results from this study show that the PSTs…
Descriptors: High Stakes Tests, Test Length, Mathematics Tests, Preservice Teachers
José Ventura-León; Cristopher Lino-Cruz; Shirley Tocto-Muñoz; Andy Rick Sánchez-Villena – Journal of Psychoeducational Assessment, 2025
Academic and occupational success requires social intelligence, the ability to comprehend and manage interpersonal connections. This research aims to assess and improve the Tromsø Social Intelligence Scale (TSIS) for Peruvian university students, focusing on cultural adaptability, reliability, and validity. Participants included 973 university…
Descriptors: Factor Analysis, Intelligence Tests, Test Items, Test Length
Xiuxiu Tang; Yi Zheng; Tong Wu; Kit-Tai Hau; Hua-Hua Chang – Journal of Educational Measurement, 2025
Multistage adaptive testing (MST) has been recently adopted for international large-scale assessments such as Programme for International Student Assessment (PISA). MST offers improved measurement efficiency over traditional nonadaptive tests and improved practical convenience over single-item-adaptive computerized adaptive testing (CAT). As a…
Descriptors: Reaction Time, Test Items, Achievement Tests, Foreign Countries
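The core of an MST design is routing examinees between stages based on interim performance; a toy two-stage router (the cutoffs and module names are illustrative, not PISA's design):

```python
def route_stage2(stage1_correct, cutoffs=(4, 8)):
    """Route an examinee to a stage-2 module by stage-1 number-correct
    score: below the first cutoff -> easier module, at or above the
    second -> harder module, otherwise the middle module."""
    easy_cut, hard_cut = cutoffs
    if stage1_correct < easy_cut:
        return "easy"
    if stage1_correct >= hard_cut:
        return "hard"
    return "medium"

print([route_stage2(s) for s in (2, 5, 9)])  # ['easy', 'medium', 'hard']
```

Routing whole preassembled modules, rather than selecting one item at a time as in CAT, is what gives MST its practical convenience at some cost in adaptivity.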
Xiao, Leifeng; Hau, Kit-Tai – Applied Measurement in Education, 2023
We compared coefficient alpha with five alternatives (omega total, omega RT, omega h, GLB, and coefficient H) in two simulation studies. Results showed for unidimensional scales, (a) all indices except omega h performed similarly well for most conditions; (b) alpha is still good; (c) GLB and coefficient H overestimated reliability with small…
Descriptors: Test Theory, Test Reliability, Factor Analysis, Test Length
Kotera, Yasuhiro; Conway, Elaine; Green, Pauline – British Journal of Guidance & Counselling, 2023
Academic motivation is important to students' mental health and performance. One established measure is the Academic Motivation Scale (AMS), comprising 28 items. AMS assesses intrinsic motivation, extrinsic motivation, and amotivation, which are further categorised into seven subscales. One weakness of AMS is its length. In this study, we…
Descriptors: Test Construction, Test Validity, Factor Analysis, Learning Motivation
Ayse Bilicioglu Gunes; Bayram Bicak – International Journal of Assessment Tools in Education, 2023
The main purpose of this study is to examine the Type I error and statistical power ratios of Differential Item Functioning (DIF) techniques based on different theories under different conditions. For this purpose, a simulation study was conducted by using Mantel-Haenszel (MH), Logistic Regression (LR), Lord's χ², and Raju's Areas…
Descriptors: Test Items, Item Response Theory, Error of Measurement, Test Bias
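The Mantel-Haenszel DIF statistic pools 2×2 tables (group × correct/incorrect) across matched score strata into a common odds ratio; a textbook sketch of that estimator (the stratum counts below are illustrative, not the study's simulation):

```python
def mantel_haenszel_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio across score strata.
    Each stratum is (a, b, c, d): reference correct/incorrect,
    focal correct/incorrect. Values near 1.0 suggest no DIF."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two strata in which both groups have identical odds of success:
print(mantel_haenszel_odds_ratio([(20, 10, 20, 10), (30, 5, 30, 5)]))  # 1.0
```

Stratifying on the matching score before pooling is what lets MH separate group differences in ability from item-level bias.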

