Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 3
  Since 2016 (last 10 years): 35
  Since 2006 (last 20 years): 66
Descriptor
  Difficulty Level: 72
  Item Response Theory: 72
  Statistical Analysis: 72
  Test Items: 59
  Foreign Countries: 25
  Multiple Choice Tests: 15
  Psychometrics: 15
  Correlation: 14
  Comparative Analysis: 13
  Models: 13
  Computation: 11
Publication Type
  Journal Articles: 61
  Reports - Research: 60
  Reports - Evaluative: 8
  Dissertations/Theses -…: 3
  Speeches/Meeting Papers: 3
  Tests/Questionnaires: 3
  Numerical/Quantitative Data: 2
  Information Analyses: 1
  Reports - Descriptive: 1
Education Level
  Secondary Education: 18
  Higher Education: 14
  Postsecondary Education: 11
  Elementary Education: 10
  Junior High Schools: 9
  Middle Schools: 9
  Grade 8: 6
  High Schools: 5
  Elementary Secondary Education: 4
  Grade 5: 3
  Grade 7: 3
Location
  Germany: 5
  Australia: 4
  Greece: 2
  Turkey: 2
  Austria: 1
  Belgium: 1
  Brazil: 1
  California: 1
  Canada: 1
  Cyprus: 1
  France: 1
Tang, Xiaodan; Karabatsos, George; Chen, Haiqin – Applied Measurement in Education, 2020
In applications of item response theory (IRT) models, it is known that empirical violations of the local independence (LI) assumption can significantly bias parameter estimates. To address this issue, we propose a threshold-autoregressive item response theory (TAR-IRT) model that additionally accounts for order dependence among the item responses…
Descriptors: Item Response Theory, Test Items, Models, Computation
Akin-Arikan, Çigdem; Gelbal, Selahattin – Eurasian Journal of Educational Research, 2021
Purpose: This study aims to compare the performances of Item Response Theory (IRT) equating and kernel equating (KE) methods based on equating errors (RMSD) and standard error of equating (SEE) using the anchor item nonequivalent groups design. Method: Within this scope, a set of conditions, including ability distribution, type of anchor items…
Descriptors: Equated Scores, Item Response Theory, Test Items, Statistical Analysis
Lozano, José H.; Revuelta, Javier – Applied Measurement in Education, 2021
The present study proposes a Bayesian approach for estimating and testing the operation-specific learning model, a variant of the linear logistic test model that allows for the measurement of the learning that occurs during a test as a result of the repeated use of the operations involved in the items. The advantages of using a Bayesian framework…
Descriptors: Bayesian Statistics, Computation, Learning, Testing
Lim, Euijin; Lee, Won-Chan – Applied Measurement in Education, 2020
The purpose of this study is to address the necessity of subscore equating and to evaluate the performance of various equating methods for subtests. Assuming the random groups design and number-correct scoring, this paper analyzed real data and simulated data with four study factors including test dimensionality, subtest length, form difference in…
Descriptors: Equated Scores, Test Length, Test Format, Difficulty Level
Lenhard, Wolfgang; Lenhard, Alexandra – Educational and Psychological Measurement, 2021
The interpretation of psychometric test results is usually based on norm scores. We compared semiparametric continuous norming (SPCN) with conventional norming methods by simulating results for test scales with different item numbers and difficulties via an item response theory approach. Subsequently, we modeled the norm scores based on random…
Descriptors: Test Norms, Scores, Regression (Statistics), Test Items
Ilhan, Mustafa – International Journal of Assessment Tools in Education, 2019
This study investigated the effectiveness of statistical adjustments applied to rater bias in many-facet Rasch analysis. Changes were first made to a dataset that did not include "rater × examinee" bias so that it would contain such bias. Bias adjustment was then applied to the rater bias included in the data file,…
Descriptors: Statistical Analysis, Item Response Theory, Evaluators, Bias
Arikan, Çigdem Akin – International Journal of Progressive Education, 2018
The main purpose of this study is to compare the equating performance of test forms using a midi anchor test and a mini anchor test, based on item response theory. The research was conducted using simulated data generated under the Rasch model. In order to equate the two test forms, the anchor item nonequivalent groups (internal anchor test) was…
Descriptors: Equated Scores, Comparative Analysis, Item Response Theory, Tests
Morgan, Grant B.; Moore, Courtney A.; Floyd, Harlee S. – Journal of Psychoeducational Assessment, 2018
Although content validity--how well each item of an instrument represents the construct being measured--is foundational in the development of an instrument, statistical validity is also important to the decisions that are made based on the instrument. The primary purpose of this study is to demonstrate how simulation studies can be used to assist…
Descriptors: Simulation, Decision Making, Test Construction, Validity
Smith, Tamarah; Smith, Samantha – International Journal of Teaching and Learning in Higher Education, 2018
The Research Methods Skills Assessment (RMSA) was created to measure psychology majors' statistics knowledge and skills. The American Psychological Association's Guidelines for the Undergraduate Major in Psychology (APA, 2007, 2013) served as a framework for development. Results from a Rasch analysis with data from n = 330 undergraduates showed…
Descriptors: Psychology, Statistics, Undergraduate Students, Item Response Theory
Kalkan, Ömür Kaya; Kara, Yusuf; Kelecioglu, Hülya – International Journal of Assessment Tools in Education, 2018
Missing data is a common problem in datasets that are obtained by administration of educational and psychological tests. It is widely known that existence of missing observations in data can lead to serious problems such as biased parameter estimates and inflation of standard errors. Most of the missing data imputation methods are focused on…
Descriptors: Item Response Theory, Statistical Analysis, Data, Test Items
Svetina, Dubravka; Levy, Roy – Journal of Experimental Education, 2016
This study investigated the effect of complex structure on dimensionality assessment in compensatory multidimensional item response models using DETECT- and NOHARM-based methods. The performance was evaluated via the accuracy of identifying the correct number of dimensions and the ability to accurately recover item groupings using a simple…
Descriptors: Item Response Theory, Accuracy, Correlation, Sample Size
Kogar, Hakan – International Journal of Assessment Tools in Education, 2018
The aim of this simulation study was to determine the relationship between true latent scores and estimated latent scores by including various control variables and different statistical models. The study also aimed to compare the statistical models and to determine the effects of different distribution types, response formats, and sample sizes on latent…
Descriptors: Simulation, Context Effect, Computation, Statistical Analysis
Verdis, Athanasios; Sotiriou, Christina – International Journal of Music Education, 2018
This study investigates the psychometric characteristics of Gordon's Advanced Measures of Music Audiation (AMMA) in a region with strong non-Western music tradition. It also examines the possibility of measuring audiation with the modern psychometric theory. The AMMA test was administered to 513 students in the city of Ioannina and a number of…
Descriptors: Psychometrics, Music, Correlation, Factor Analysis
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook – Journal of Educational Measurement, 2015
This inquiry is an investigation of item response theory (IRT) proficiency estimators' accuracy under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
Descriptors: Comparative Analysis, Item Response Theory, Computation, Accuracy
Karlin, Omar; Karlin, Sayaka – InSight: A Journal of Scholarly Teaching, 2018
This study had two aims. The first was to explain the process of using the Rasch measurement model to validate tests in an easy-to-understand way for those unfamiliar with the Rasch measurement model. The second was to validate two final exams with several shared items. The exams were given to two groups of students with slightly differing English…
Descriptors: Item Response Theory, Test Validity, Test Items, Accuracy