Showing 1 to 15 of 19 results
Peer reviewed | Direct link
Fuchimoto, Kazuma; Ishii, Takatoshi; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2022
Educational assessments often require uniform test forms, in which each form has equivalent measurement accuracy but a different set of items. For uniform test assembly, an important issue is increasing the number of assembled uniform tests. Although many automatic uniform test assembly methods exist, the maximum clique algorithm…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
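As a rough illustration of the clique-based assembly idea in this line of work (a hedged sketch, not the authors' algorithm): treat candidate forms as graph nodes, connect two forms when they share no items and their test information at the target ability point is nearly equal, and take a maximum clique as a set of mutually uniform forms. The tolerance, toy data, and use of `networkx` below are all assumptions.

```python
import itertools
import networkx as nx

def assemble_uniform_forms(forms, info, tol=0.5):
    """forms: list of frozensets of item ids; info: test information of each
    form at the target ability point. Returns a largest set of pairwise
    item-disjoint forms with near-equal information."""
    g = nx.Graph()
    g.add_nodes_from(range(len(forms)))
    for i, j in itertools.combinations(range(len(forms)), 2):
        if forms[i].isdisjoint(forms[j]) and abs(info[i] - info[j]) <= tol:
            g.add_edge(i, j)
    clique, _ = nx.max_weight_clique(g, weight=None)  # exact; exponential in the worst case
    return [forms[k] for k in clique]

# toy pool: four candidate forms and their information at theta = 0
forms = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6}), frozenset({1, 5})]
info = [10.1, 10.3, 10.2, 12.0]
print(assemble_uniform_forms(forms, info))  # the three disjoint, near-equal forms
```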
Peer reviewed | Download full text (PDF on ERIC)
Saatcioglu, Fatima Munevver; Atar, Hakan Yavuz – International Journal of Assessment Tools in Education, 2022
This study aims to examine the effects of mixture item response theory (IRT) models on item parameter estimation and classification accuracy under different conditions. The manipulated variables of the simulation study are set as mixture IRT models (Rasch, 2PL, 3PL); sample size (600, 1000); the number of items (10, 30); the number of latent…
Descriptors: Accuracy, Classification, Item Response Theory, Programming Languages
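For readers wanting to see what the manipulated conditions produce, here is a hedged sketch of generating responses under a two-class mixture 2PL model; the class proportions, sample size, and parameter ranges are illustrative, not the study's design.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items, n_classes = 600, 10, 2                  # one cell of the design

mix = rng.choice(n_classes, size=n_persons, p=[0.6, 0.4])   # latent class labels
theta = rng.normal(0.0, 1.0, n_persons)                     # abilities
a = rng.uniform(0.8, 2.0, (n_classes, n_items))             # class-specific discriminations
b = rng.normal(0.0, 1.0, (n_classes, n_items))              # class-specific difficulties

# P(X = 1) = logistic(a_c * (theta - b_c)), parameters taken from each person's class
logits = a[mix] * (theta[:, None] - b[mix])
responses = (rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-logits))).astype(int)
print(responses.shape)  # (600, 10)
```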
Peer reviewed | Direct link
Ames, Allison J.; Leventhal, Brian C.; Ezike, Nnamdi C. – Measurement: Interdisciplinary Research and Perspectives, 2020
Data simulation and Monte Carlo simulation studies are important skills for researchers and practitioners of educational and psychological measurement, but there are few resources on the topic specific to item response theory. Even fewer resources exist on the statistical software techniques to implement simulation studies. This article presents…
Descriptors: Monte Carlo Methods, Item Response Theory, Simulation, Computer Software
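A minimal version of the simulate-estimate-evaluate cycle such an article teaches might look like the following; the 2PL model, EAP scoring on a quadrature grid, and the replication counts are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)
a = rng.uniform(0.8, 2.0, 20)                   # known discriminations
b = rng.normal(0.0, 1.0, 20)                    # known difficulties
grid = np.linspace(-4, 4, 81)                   # quadrature points
prior = np.exp(-0.5 * grid**2)                  # standard normal prior (unnormalized)
P = 1 / (1 + np.exp(-a * (grid[:, None] - b)))  # grid-by-item 2PL probabilities

def eap(x):
    """EAP ability estimate for one 0/1 response vector."""
    like = np.prod(np.where(x == 1, P, 1 - P), axis=1)
    post = like * prior
    return np.sum(grid * post) / np.sum(post)

bias, rmse = [], []
for _ in range(100):                            # Monte Carlo replications
    theta = rng.normal(0, 1, 200)
    x = (rng.random((200, 20)) < 1 / (1 + np.exp(-a * (theta[:, None] - b)))).astype(int)
    est = np.array([eap(row) for row in x])
    bias.append(np.mean(est - theta))
    rmse.append(np.sqrt(np.mean((est - theta) ** 2)))
print(f"bias = {np.mean(bias):.3f}, RMSE = {np.mean(rmse):.3f}")
```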
Peer reviewed | Direct link
Legacy, Chelsey; Zieffler, Andrew; Fry, Elizabeth Brondos; Le, Laura – Statistics Education Research Journal, 2022
The influx of data and the advances in computing have led to calls to update the introductory statistics curriculum to better meet the needs of the contemporary workforce. To this end, we developed the COMputational Practices in Undergraduate TEaching of Statistics (COMPUTES) instrument, which can be used to measure the extent to which computation…
Descriptors: Statistics Education, Introductory Courses, Undergraduate Students, Teaching Methods
Peer reviewed | Download full text (PDF on ERIC)
Pelánek, Radek; Effenberger, Tomáš; Kukucka, Adam – Journal of Educational Data Mining, 2022
We study the automatic identification of educational items worthy of content authors' attention. Based on the results of such analysis, content authors can revise and improve the content of learning environments. We provide an overview of item properties relevant to this task, including difficulty and complexity measures, item discrimination, and…
Descriptors: Item Analysis, Identification, Difficulty Level, Case Studies
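Two of the item properties surveyed here, classical difficulty and corrected item-total discrimination, are cheap to compute from a scored response matrix; the flagging threshold below is an illustrative choice, not the paper's.

```python
import numpy as np

def item_analysis(x, low=0.2):
    """x: persons-by-items 0/1 matrix. Returns difficulty, discrimination,
    and indices of items that may deserve a content author's attention."""
    difficulty = x.mean(axis=0)                     # proportion correct per item
    disc = np.empty(x.shape[1])
    for j in range(x.shape[1]):
        rest = x.sum(axis=1) - x[:, j]              # total score excluding item j
        disc[j] = np.corrcoef(x[:, j], rest)[0, 1]  # corrected item-total correlation
    return difficulty, disc, np.where(disc < low)[0]

# toy data from a Rasch-like model
rng = np.random.default_rng(0)
theta = rng.normal(0, 1, 500)
b = np.linspace(-1.5, 1.5, 8)
x = (rng.random((500, 8)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)
difficulty, disc, flagged = item_analysis(x)
print(np.round(difficulty, 2), np.round(disc, 2), flagged)
```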
Peer reviewed | Download full text (PDF on ERIC)
Aybek, Eren Can; Demirtasli, R. Nukhet – International Journal of Research in Education and Science, 2017
This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. It also aims to introduce simulation and live CAT software to interested researchers. Computerized adaptive test algorithm, assumptions of item response theory models, nominal response…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
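The CAT cycle the article frames (select the most informative item, administer it, re-estimate ability, stop) can be sketched for the dichotomous 2PL as below; the article itself also treats polytomous models, so the pool, fixed-length stopping rule, and EAP update here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.uniform(0.8, 2.0, 200)          # item pool discriminations
b = rng.normal(0.0, 1.2, 200)           # item pool difficulties
grid = np.linspace(-4, 4, 81)
post = np.exp(-0.5 * grid**2)           # prior, updated into a posterior

def p2pl(a_, b_, th):
    return 1 / (1 + np.exp(-a_ * (th - b_)))

true_theta, theta_hat, used = 0.8, 0.0, []
for _ in range(20):                      # fixed-length stopping rule
    p_hat = p2pl(a, b, theta_hat)
    info = a**2 * p_hat * (1 - p_hat)    # 2PL Fisher information at theta_hat
    info[used] = -np.inf                 # never reuse an administered item
    j = int(np.argmax(info))
    used.append(j)
    x = rng.random() < p2pl(a[j], b[j], true_theta)  # simulated examinee response
    pj = p2pl(a[j], b[j], grid)
    post *= pj if x else 1 - pj          # Bayes update on the grid
    theta_hat = float(np.sum(grid * post) / np.sum(post))  # EAP estimate
print(f"final estimate {theta_hat:.2f} (true {true_theta})")
```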
Peer reviewed | Direct link
Yang, Ji Seung; Zheng, Xiaying – Journal of Educational and Behavioral Statistics, 2018
The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (IRT) package available as of Stata v.14 (2015). Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment, we review the IRT package from…
Descriptors: Item Response Theory, Item Analysis, Computer Software, Statistical Analysis
Peer reviewed | Direct link
Ishii, Takatoshi; Songmuang, Pokpong; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2014
Educational assessments occasionally require uniform test forms for which each test form comprises a different set of items, but the forms meet equivalent test specifications (i.e., qualities indicated by test information functions based on item response theory). We propose two maximum clique algorithms (MCA) for uniform test form assembly. The…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
Peer reviewed | Download full text (PDF on ERIC)
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
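One standard way to quantify the human-machine agreement such evaluations report is quadratically weighted kappa; a plain NumPy version is sketched below, with the score scale as an assumption.

```python
import numpy as np

def quadratic_weighted_kappa(h, m, n_levels=6):
    """h, m: integer scores (0..n_levels-1) from human and machine raters."""
    h, m = np.asarray(h), np.asarray(m)
    obs = np.zeros((n_levels, n_levels))
    for i, j in zip(h, m):
        obs[i, j] += 1
    obs /= obs.sum()                                  # observed joint distribution
    exp = np.outer(np.bincount(h, minlength=n_levels),
                   np.bincount(m, minlength=n_levels)) / len(h) ** 2
    w = np.subtract.outer(np.arange(n_levels), np.arange(n_levels)) ** 2
    w = w / (n_levels - 1) ** 2                       # quadratic disagreement weights
    return 1 - (w * obs).sum() / (w * exp).sum()

print(quadratic_weighted_kappa([3, 4, 5, 2, 3], [3, 4, 4, 2, 4]))
```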
Peer reviewed | Direct link
Yurdugul, Halil – Applied Psychological Measurement, 2009
This article describes SIMREL, a software program designed to simulate alpha coefficients and to estimate their confidence intervals. SIMREL runs in one of two modes. In the first, applied to a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…
Descriptors: Intervals, Monte Carlo Methods, Computer Software, Factor Analysis
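SIMREL itself is not reproduced here; the sketch below shows the two quantities it revolves around, coefficient alpha and a confidence interval for it, with a percentile bootstrap standing in as an assumed interval method.

```python
import numpy as np

def cronbach_alpha(x):
    """x: persons-by-items score matrix."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def alpha_ci(x, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap interval for alpha (an assumed method, not SIMREL's)."""
    rng = np.random.default_rng(seed)
    boots = [cronbach_alpha(x[rng.integers(0, len(x), len(x))]) for _ in range(n_boot)]
    return tuple(np.percentile(boots, [(1 - level) / 2 * 100, (1 + level) / 2 * 100]))

# toy data: eight items sharing one common factor
rng = np.random.default_rng(4)
f = rng.normal(0, 1, (300, 1))
x = f + rng.normal(0, 1, (300, 8))
print(round(cronbach_alpha(x), 2), alpha_ci(x))
```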
Peer reviewed | Direct link
Hu, Huiqin; Rogers, W. Todd; Vukmirovic, Zarko – Applied Psychological Measurement, 2008
Common items with inconsistent b-parameter estimates may have a serious impact on item response theory (IRT)-based equating results. To find a better way to deal with outlier common items with inconsistent b-parameters, the current study investigated the comparability of 10 variations of four IRT-based equating methods (i.e., concurrent…
Descriptors: Item Response Theory, Item Analysis, Computer Simulation, Equated Scores
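For context, the common-item transformation at issue can be written down compactly; below is a hedged mean/sigma equating sketch with a simple 2-SD screen for inconsistent b-parameters (an illustrative rule, not the study's procedure).

```python
import numpy as np

def mean_sigma(b_from, b_to):
    """Slope A and intercept B placing the 'from' b-scale onto the 'to' scale."""
    A = b_to.std(ddof=1) / b_from.std(ddof=1)
    B = b_to.mean() - A * b_from.mean()
    return A, B

def purified_equating(b_from, b_to, z_crit=2.0):
    A, B = mean_sigma(b_from, b_to)
    resid = b_to - (A * b_from + B)                  # misfit of each common item
    z = (resid - resid.mean()) / resid.std(ddof=1)
    keep = np.abs(z) < z_crit                        # drop outlier common items
    return mean_sigma(b_from[keep], b_to[keep]), keep
```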
Peer reviewed
Yen, Wendy M. – Psychometrika, 1987
Comparisons are made between BILOG version 2.2 and LOGIST 5.0 version 2.5 in estimating the item parameters, traits, item characteristic functions, and test characteristic functions for the three-parameter logistic model. Speed and accuracy are reported for a number of 10-, 20-, and 40-item tests. (Author/GDC)
Descriptors: Comparative Analysis, Computer Simulation, Computer Software, Item Analysis
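For reference, the model both programs calibrate is the three-parameter logistic; its item and test characteristic functions are short enough to state directly (the D = 1.7 scaling constant is the usual normal-metric convention).

```python
import numpy as np

def p3pl(theta, a, b, c, D=1.7):
    """3PL item characteristic function: guessing floor c, slope a, location b."""
    return c + (1 - c) / (1 + np.exp(-D * a * (theta - b)))

def test_characteristic(theta, a, b, c):
    """Expected number-correct score: the sum of the item characteristic curves."""
    return sum(p3pl(theta, ai, bi, ci) for ai, bi, ci in zip(a, b, c))
```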
Peer reviewed
Collins, Linda M.; And Others – Multivariate Behavioral Research, 1986
The present study compares the performance of phi coefficients and tetrachorics along two dimensions of factor recovery in binary data. These dimensions are (1) accuracy of nontrivial factor identifications; and (2) factor structure recovery given a priori knowledge of the correct number of factors to rotate. (Author/LMO)
Descriptors: Computer Software, Factor Analysis, Factor Structure, Item Analysis
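The two coefficients being compared can be computed from each item pair's 2x2 table; the sketch below uses the cosine approximation to the tetrachoric rather than full bivariate-normal estimation, which is an assumed shortcut.

```python
import numpy as np

def phi_coefficient(t):
    """t = [[n00, n01], [n10, n11]]: joint counts for two binary items."""
    (n00, n01), (n10, n11) = t
    num = n11 * n00 - n10 * n01
    den = np.sqrt((n11 + n10) * (n00 + n01) * (n11 + n01) * (n00 + n10))
    return num / den

def tetrachoric_approx(t):
    """Cosine approximation; assumes all four cells are nonzero."""
    (n00, n01), (n10, n11) = t
    return np.cos(np.pi / (1 + np.sqrt((n00 * n11) / (n01 * n10))))

table = [[40, 10], [15, 35]]
print(round(phi_coefficient(table), 3), round(tetrachoric_approx(table), 3))
```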
Thissen, David; And Others – 1984
This report documents a computer program for simulation evaluation of item response theory (IRT) ability estimators. This Honeywell DPS version is a very slight modification of the program originally developed in FORTRAN-77 on a DEC VAX 11/780. The model of the world simulated by the program involves a unidimensional test in which the probability…
Descriptors: Academic Ability, Computer Simulation, Computer Software, Estimation (Mathematics)
Samejima, Fumiko – 1984
In order to evaluate our methods and approaches for estimating the operating characteristics of discrete item responses, it is necessary to try other comparable methods on similar sets of data. LOGIST 5 was chosen for this reason and was tried on the hypothetical test items, which follow the normal ogive model and were used frequently in…
Descriptors: Computer Simulation, Computer Software, Estimation (Mathematics), Item Analysis
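For reference, the normal ogive model the hypothetical items follow is the normal-CDF counterpart of the logistic models above; a one-line sketch:

```python
from scipy.stats import norm

def p_normal_ogive(theta, a, b):
    """Normal ogive ICC: Phi(a * (theta - b)); close to the 2PL with D = 1.7."""
    return norm.cdf(a * (theta - b))
```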