Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 7 |
Since 2016 (last 10 years) | 20 |
Since 2006 (last 20 years) | 57 |
Descriptor
Computation | 61 |
Test Length | 61 |
Item Response Theory | 40 |
Test Items | 28 |
Sample Size | 26 |
Accuracy | 19 |
Simulation | 19 |
Maximum Likelihood Statistics | 15 |
Bayesian Statistics | 14 |
Error of Measurement | 14 |
Correlation | 12 |
Author
Wang, Wen-Chung | 4 |
Cheng, Ying | 3 |
He, Wei | 2 |
Kilic, Abdullah Faruk | 2 |
Lathrop, Quinn N. | 2 |
Lee, Yi-Hsuan | 2 |
Liu, Chen-Wei | 2 |
Paek, Insu | 2 |
Zhang, Jinming | 2 |
de la Torre, Jimmy | 2 |
Atar, Burcu | 1 |
Publication Type
Journal Articles | 51 |
Reports - Research | 38 |
Reports - Evaluative | 14 |
Dissertations/Theses -… | 8 |
Reports - Descriptive | 1 |
Education Level
Higher Education | 2 |
Postsecondary Education | 2 |
Early Childhood Education | 1 |
Elementary Secondary Education | 1 |
High Schools | 1 |
Preschool Education | 1 |
Secondary Education | 1 |
Assessments and Surveys
National Assessment of… | 1 |
National Longitudinal Study… | 1 |
Trends in International… | 1 |
He, Yinhong – Journal of Educational Measurement, 2023
Back random responding (BRR) is a commonly observed careless response behavior. Accurately detecting BRR behavior can improve test validity. Yu and Cheng (2019) showed that the change point analysis (CPA) procedure based on weighted residuals (CPA-WR) performed well in detecting BRR. Compared with the CPA procedure, the…
Descriptors: Test Validity, Item Response Theory, Measurement, Monte Carlo Methods
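A rough sketch of the weighted-residual change-point idea described above: standardize each response's residual under a fitted 2PL model, then scan candidate change points for the largest shift in the residual pattern toward the end of the test. The item parameters, the CUSUM-style scan, and the toy data below are illustrative assumptions, not the exact CPA-WR statistic from Yu and Cheng (2019).

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def change_point_scan(responses, theta, a, b):
    """Return the candidate change point with the largest shift in
    mean weighted residual before vs. after it (illustrative only)."""
    p = p_2pl(theta, a, b)
    resid = (responses - p) / np.sqrt(p * (1.0 - p))   # weighted residuals
    n = len(responses)
    best_stat, best_cp = 0.0, None
    for cp in range(5, n - 5):                         # avoid tiny segments
        diff = abs(resid[:cp].mean() - resid[cp:].mean())
        if diff > best_stat:
            best_stat, best_cp = diff, cp
    return best_cp, best_stat

# Toy example: normal responding for 30 items, then random guessing at the back.
rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, 40)
b = rng.normal(0.0, 1.0, 40)
theta = 1.0
u = (rng.random(40) < p_2pl(theta, a, b)).astype(float)
u[30:] = (rng.random(10) < 0.25).astype(float)         # back random responding
print(change_point_scan(u, theta, a, b))
```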
Edwards, Ashley A.; Joyner, Keanan J.; Schatschneider, Christopher – Educational and Psychological Measurement, 2021
The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (including Cronbach's alpha, omega, omega hierarchical, Revelle's omega, and the greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors and varying…
Descriptors: Reliability, Computation, Accuracy, Sample Size
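For readers unfamiliar with the estimators being compared above, the two most common ones are simple to compute by hand. The sketch below simulates unidimensional continuous data with uncorrelated errors, computes Cronbach's alpha from the sample covariances, and computes McDonald's omega from the generating loadings; the loadings, sample size, and item count are arbitrary illustration choices, not the study's conditions.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 500, 8                       # examinees, items (arbitrary)
loadings = np.full(k, 0.6)          # unidimensional, equal loadings (assumed)
err_sd = np.sqrt(1.0 - loadings**2) # unit-variance items, uncorrelated errors

eta = rng.normal(size=(n, 1))                       # latent factor scores
X = eta * loadings + rng.normal(size=(n, k)) * err_sd

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)
item_var = X.var(axis=0, ddof=1)
total_var = X.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1.0 - item_var.sum() / total_var)

# McDonald's omega from the (known) loadings and error variances:
# omega = (sum lambda)^2 / [(sum lambda)^2 + sum of error variances]
omega = loadings.sum()**2 / (loadings.sum()**2 + (err_sd**2).sum())

print(f"alpha = {alpha:.3f}, omega (from true loadings) = {omega:.3f}")
```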
Chenchen Ma; Jing Ouyang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Survey instruments and assessments are frequently used in many domains of social science. When the constructs that these assessments try to measure become multifaceted, multidimensional item response theory (MIRT) provides a unified framework and convenient statistical tool for item analysis, calibration, and scoring. However, the computational…
Descriptors: Algorithms, Item Response Theory, Scoring, Accuracy
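To make the computational issue alluded to above concrete: marginal maximum likelihood estimation in MIRT integrates over a latent space whose fixed quadrature grid grows exponentially with the number of dimensions. The sketch below evaluates a multidimensional 2PL response probability over such a grid; the item parameters and grid settings are assumptions for illustration only.

```python
import numpy as np
from itertools import product

def m2pl_prob(theta, a, d):
    """Multidimensional 2PL: P(correct) = logistic(a . theta + d)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) + d)))

# Why MIRT estimation gets expensive: the quadrature grid over the latent
# space grows exponentially with the number of dimensions.
points_per_dim = 21
for dims in (1, 2, 3, 5):
    print(dims, "dims ->", points_per_dim**dims, "quadrature nodes")

# One grid evaluation for a 2-dimensional item (illustrative parameters).
a = np.array([1.2, 0.8])
d = -0.3
nodes = np.linspace(-4, 4, points_per_dim)
grid = np.array(list(product(nodes, repeat=2)))
probs = np.array([m2pl_prob(t, a, d) for t in grid])
print("evaluated", probs.size, "probabilities for one item")
```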
Kim, Hyung Jin; Lee, Won-Chan – Journal of Educational Measurement, 2022
Orlando and Thissen (2000) introduced the "S-X²" item-fit index for testing goodness-of-fit with dichotomous item response theory (IRT) models. This study considers and evaluates an alternative approach for computing "S-X²" values and other factors associated with collapsing tables of observed…
Descriptors: Goodness of Fit, Test Items, Item Response Theory, Computation
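As background for the entry above: S-X² compares, within each summed-score group, the observed proportion correct on an item with the proportion the fitted model predicts. The sketch below approximates the model-predicted proportions with a large simulated sample rather than the analytic Lord-Wingersky recursion the original index uses; the item parameters, sample sizes, and sparse-group rule are illustrative assumptions.

```python
import numpy as np

def sim_2pl(theta, a, b, rng):
    """Simulate dichotomous responses under a 2PL model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (rng.random(p.shape) < p).astype(int)

rng = np.random.default_rng(7)
a = rng.uniform(0.8, 2.0, 20)
b = rng.normal(0.0, 1.0, 20)

obs = sim_2pl(rng.normal(size=1000), a, b, rng)      # "observed" sample
pred = sim_2pl(rng.normal(size=200_000), a, b, rng)  # model-implied reference

def s_x2_like(item, data_obs, data_pred):
    """Pearson-type fit statistic for one item across summed-score groups.

    Expected proportions come from the large model-implied sample,
    a Monte Carlo stand-in for the Lord-Wingersky recursion.
    """
    score_obs = data_obs.sum(axis=1)
    score_pred = data_pred.sum(axis=1)
    stat = 0.0
    for s in np.unique(score_obs):
        grp = data_obs[score_obs == s, item]
        ref = data_pred[score_pred == s, item]
        if len(grp) < 10 or len(ref) < 10:
            continue                                  # skip sparse score groups
        o, e = grp.mean(), ref.mean()
        if 0.0 < e < 1.0:
            stat += len(grp) * (o - e) ** 2 / (e * (1.0 - e))
    return stat

print("S-X2-style statistic for item 0:", round(s_x2_like(0, obs, pred), 2))
```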
Derek Sauder – ProQuest LLC, 2020
The Rasch model is commonly used to calibrate multiple choice items. However, the sample sizes needed to estimate the Rasch model can be difficult to attain (e.g., consider a small testing company trying to pretest new items). With small sample sizes, auxiliary information besides the item responses may improve estimation of the item parameters.…
Descriptors: Item Response Theory, Sample Size, Computation, Test Length
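The snippet above does not show the dissertation's specific method, so as a generic illustration of how auxiliary information can stabilize small-sample Rasch calibration, the sketch below adds a normal prior on item difficulty (MAP rather than plain ML). The prior, sample size, and known abilities are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_posterior(b, u, theta, prior_mean=0.0, prior_sd=1.0, use_prior=True):
    """Negative log-posterior for one Rasch item difficulty b.

    u: 0/1 responses; theta: (assumed known) examinee abilities.
    With use_prior=False this reduces to plain maximum likelihood.
    """
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    ll = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    if use_prior:
        ll += -0.5 * ((b - prior_mean) / prior_sd) ** 2   # N(0,1) prior on b
    return -ll

rng = np.random.default_rng(3)
theta = rng.normal(size=30)                # small sample: 30 examinees
b_true = 1.5
u = (rng.random(30) < 1.0 / (1.0 + np.exp(-(theta - b_true)))).astype(float)

for use_prior in (False, True):
    fit = minimize_scalar(neg_log_posterior, bounds=(-6, 6), method="bounded",
                          args=(u, theta, 0.0, 1.0, use_prior))
    print("with prior" if use_prior else "plain ML  ", "b_hat =", round(fit.x, 2))
```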
Baris Pekmezci, Fulya; Sengul Avsar, Asiye – International Journal of Assessment Tools in Education, 2021
A great deal of item response theory (IRT) research is conducted through simulation. Item and ability parameters are estimated with varying numbers of replications under different test conditions. However, it is not clear what the appropriate number of replications should be. The aim of the current study is to develop guidelines for the…
Descriptors: Item Response Theory, Computation, Accuracy, Monte Carlo Methods
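The guidelines themselves are in the paper above; the sketch below only shows the quantity such guidelines target: the Monte Carlo standard error of a recovery statistic (here, the bias of a crude difficulty estimate) shrinks with the square root of the number of replications. All simulation settings and the placeholder estimator are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_replication(n_persons=200, b_true=0.5):
    """One tiny Rasch-style replication: simulate, then crudely estimate b.

    The 'estimator' is just the negative logit of the item p-value,
    a placeholder rather than a real calibration routine.
    """
    theta = rng.normal(size=n_persons)
    p = 1.0 / (1.0 + np.exp(-(theta - b_true)))
    u = (rng.random(n_persons) < p).astype(float)
    p_value = np.clip(u.mean(), 1e-3, 1 - 1e-3)
    return -np.log(p_value / (1 - p_value))          # crude b estimate

b_true = 0.5
estimates = np.array([one_replication(b_true=b_true) for _ in range(2000)])
for r in (25, 100, 500, 2000):
    bias = estimates[:r].mean() - b_true             # includes estimator bias
    mcse = estimates[:r].std(ddof=1) / np.sqrt(r)    # Monte Carlo SE of the bias
    print(f"R={r:4d}  bias={bias:+.3f}  Monte Carlo standard error={mcse:.3f}")
```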
Öztürk, Nagihan Boztunç – Universal Journal of Educational Research, 2019
This study examines how the length and characteristics of the routing module in different panel designs affect measurement precision. The study covers six routing module lengths, nine routing module characteristics, and two panel designs. At the end of the study, the effects of these conditions on…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Length, Test Format
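To make "routing module" concrete for the entry above, the sketch below simulates a two-stage multistage design in which a routing module of varying length sends examinees to an easier or harder second-stage module based on a number-correct cutoff. Module lengths, item difficulties, and the cutoff are illustrative assumptions; real panels use IRT-based routing rules.

```python
import numpy as np

rng = np.random.default_rng(11)

def prob(theta, b):
    """1PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))

def two_stage_consistency(routing_len, n=5000):
    """Fraction of examinees routed to the module matching their ability.

    Routing rule: more than half correct on the routing module -> hard module.
    """
    theta = rng.normal(size=n)
    routing_items = np.linspace(-0.5, 0.5, routing_len)      # medium difficulty
    score = (rng.random((n, routing_len)) < prob(theta, routing_items)).sum(axis=1)
    routed_hard = score > routing_len / 2
    should_be_hard = theta > 0                                # "true" classification
    return (routed_hard == should_be_hard).mean()

for length in (5, 10, 20, 40):
    print(f"routing module of {length:2d} items -> "
          f"{two_stage_consistency(length):.2%} routed consistently with ability")
```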
Zhou, Sherry; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2020
The semi-generalized partial credit model (Semi-GPCM) has been proposed as a unidimensional modeling method for handling not applicable scale responses and neutral scale responses, and it has been suggested that the model may be of use in handling missing data in scale items. The purpose of this study is to evaluate the ability of the…
Descriptors: Models, Statistical Analysis, Response Style (Tests), Test Items
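As background for the entry above: the generalized partial credit model that the Semi-GPCM extends assigns category probabilities through cumulative step parameters. The sketch below computes GPCM category probabilities for one polytomous item; the parameters are arbitrary, and the "semi-" extension for not-applicable and neutral categories is not implemented here.

```python
import numpy as np

def gpcm_probs(theta, a, deltas):
    """GPCM category probabilities for one item.

    deltas[k] is the step parameter for moving from category k to k+1;
    category scores run 0..len(deltas).
    """
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(deltas)))))
    expo = np.exp(steps - steps.max())          # stabilized softmax
    return expo / expo.sum()

# Illustrative 4-category item (discrimination and step parameters are assumptions).
for theta in (-1.0, 0.0, 1.5):
    p = gpcm_probs(theta, a=1.2, deltas=[-1.0, 0.0, 1.0])
    print(theta, np.round(p, 3))
```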
Patton, Jeffrey M.; Cheng, Ying; Hong, Maxwell; Diao, Qi – Journal of Educational and Behavioral Statistics, 2019
In psychological and survey research, the prevalence and serious consequences of careless responses from unmotivated participants are well known. In this study, we propose to iteratively detect careless responders and cleanse the data by removing their responses. The careless responders are detected using person-fit statistics. In two simulation…
Descriptors: Test Items, Response Style (Tests), Identification, Computation
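The snippet above does not name the study's person-fit statistics, so the sketch below uses the widely used standardized log-likelihood statistic lz under a 2PL model and flags respondents below a cutoff, one simple instance of the detect-and-cleanse idea. The cutoff, item parameters, and the fact that abilities are not re-estimated between passes are all simplifying assumptions.

```python
import numpy as np

def lz_statistic(u, theta, a, b):
    """Standardized log-likelihood person-fit statistic (lz) under a 2PL."""
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p), axis=1)
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p), axis=1)
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2, axis=1)
    return (l0 - e) / np.sqrt(v)

rng = np.random.default_rng(5)
n, k = 300, 40
a = rng.uniform(0.8, 2.0, k)
b = rng.normal(size=k)
theta = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
u = (rng.random((n, k)) < p).astype(float)
u[:30] = (rng.random((30, k)) < 0.5).astype(float)   # 30 simulated careless responders

# One cleansing pass: flag respondents with lz below an (assumed) cutoff.
# In an iterative procedure, abilities would be re-estimated after each removal.
lz = lz_statistic(u, theta, a, b)
flagged = lz < -1.64
print("flagged", flagged.sum(), "respondents;",
      flagged[:30].sum(), "of the 30 simulated careless ones")
```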
Kilic, Abdullah Faruk; Uysal, Ibrahim; Atar, Burcu – International Journal of Assessment Tools in Education, 2020
This Monte Carlo simulation study aimed to investigate confirmatory factor analysis (CFA) estimation methods under different conditions, such as sample size, distribution of indicators, test length, average factor loading, and factor structure. Binary data were generated to compare the performance of maximum likelihood (ML), mean and variance…
Descriptors: Factor Analysis, Computation, Methods, Sample Size
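Fitting the competing estimators requires SEM software, but the data-generation step that such Monte Carlo comparisons share is easy to show: continuous responses from a one-factor model are dichotomized at thresholds to produce binary indicators. The loadings, thresholds, and sample size below are placeholders, not the study's conditions.

```python
import numpy as np

rng = np.random.default_rng(8)
n, k = 1000, 10                       # sample size and test length (assumed)
loadings = rng.uniform(0.4, 0.8, k)   # "average factor loading" condition
thresholds = rng.normal(0.0, 0.7, k)  # controls the indicator distributions

factor = rng.normal(size=(n, 1))
latent = factor * loadings + rng.normal(size=(n, k)) * np.sqrt(1 - loadings**2)
binary = (latent > thresholds).astype(int)   # dichotomize -> binary indicators

# The generated matrix would then be fitted with ML, WLSMV, etc. in SEM software
# (e.g., lavaan or Mplus) and the recovered loadings compared against `loadings`.
print("item means:", np.round(binary.mean(axis=0), 2))
```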
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics
Gawliczek, Piotr; Krykun, Viktoriia; Tarasenko, Nataliya; Tyshchenko, Maksym; Shapran, Oleksandr – Advanced Education, 2021
The article deals with an innovative, cutting-edge solution in the language testing realm, namely computer adaptive language testing (CALT) in accordance with the NATO Standardization Agreement 6001 (NATO STANAG 6001) requirements, for further implementation in the foreign language training of personnel of the Armed Forces of Ukraine (AF of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Language Tests, Second Language Instruction
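STANAG 6001 specifics aside, the core mechanic of the computer adaptive testing discussed above is selecting the next item that is most informative at the current ability estimate. The sketch below shows maximum Fisher information selection under a 2PL; the item bank, the simulated answers, and the crude ability update are toy assumptions (operational CATs use EAP or MLE updates).

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

rng = np.random.default_rng(2)
a = rng.uniform(0.8, 2.0, 100)         # toy 100-item bank
b = rng.normal(0.0, 1.0, 100)
administered = []
theta_hat = 0.0                        # start at the prior mean

for step in range(5):
    info = info_2pl(theta_hat, a, b)
    info[administered] = -np.inf       # never reuse an item
    item = int(np.argmax(info))
    administered.append(item)
    correct = rng.random() < 0.6       # stand-in for the examinee's answer
    theta_hat += 0.5 if correct else -0.5   # crude update for illustration
    print(f"step {step + 1}: item {item} (b={b[item]:+.2f}), theta_hat={theta_hat:+.1f}")
```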
Yavuz, Guler; Hambleton, Ronald K. – Educational and Psychological Measurement, 2017
Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…
Descriptors: Item Response Theory, Models, Comparative Analysis, Computer Software
Huang, Hung-Yu – Educational and Psychological Measurement, 2017
Mixture item response theory (IRT) models have been suggested as an efficient method of detecting the different response patterns derived from latent classes when developing a test. In testing situations, multiple latent traits measured by a battery of tests can exhibit a higher-order structure, and mixtures of latent classes may occur on…
Descriptors: Item Response Theory, Models, Bayesian Statistics, Computation
Yildiz, Mustafa – ProQuest LLC, 2017
Student misconceptions have been studied for decades from a curricular/instructional perspective and from the assessment/test level perspective. Numerous misconception assessment tools have been developed in order to measure students' misconceptions relative to the correct content. Often, these tools are used to make a variety of educational…
Descriptors: Misconceptions, Students, Item Response Theory, Models