Nakamura, Yasuyuki; Nishi, Shinnosuke; Muramatsu, Yuta; Yasutake, Koichi; Yamakawa, Osamu; Tagawa, Takahiro – International Association for Development of the Information Society, 2014
In this paper, we introduce a mathematical model for collaborative learning and the answering process for multiple-choice questions. The collaborative learning model is inspired by the Ising spin model and the model for answering multiple-choice questions is based on their difficulty level. An intensive simulation study predicts the possibility of…
Descriptors: Mathematical Models, Cooperative Learning, Multiple Choice Tests, Mathematics Instruction
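
The snippet names an Ising-inspired collaboration model and a difficulty-based answering model without giving their details. As a rough, hypothetical sketch of that kind of setup (the logistic answering rule, the coupling constant, and all values below are assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, b, field):
    # Logistic answering rule: learner ability theta vs. item difficulty b,
    # shifted by an Ising-style peer-influence "field" (assumption).
    return 1.0 / (1.0 + np.exp(-(theta - b + field)))

n_learners, J_coupling, steps = 50, 0.4, 2000
theta = rng.normal(0.0, 1.0, n_learners)      # learner abilities
state = rng.choice([-1.0, 1.0], n_learners)   # +1 = currently answering correctly
b = 0.5                                       # difficulty of one multiple-choice item

for _ in range(steps):                        # Glauber-style single-site updates
    i = rng.integers(n_learners)
    field = J_coupling * state.mean()         # mean-field collaboration term
    state[i] = 1.0 if rng.random() < p_correct(theta[i], b, field) else -1.0

print("fraction answering correctly:", (state == 1.0).mean())
```
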
Inzunsa, Santiago; Romero, Mario – North American Chapter of the International Group for the Psychology of Mathematics Education, 2012
This paper reports the results of research on the strategies and difficulties developed by university students when modeling and simulating random phenomena in a spreadsheet environment. The results indicate that students had difficulty identifying key components of the problems, which are crucial to formulating a…
Descriptors: Simulation, Mathematics Instruction, Spreadsheets, Undergraduate Students

Kim, Seock-Ho; Cohen, Allan S. – Applied Psychological Measurement, 1998
Compared three methods for developing a common metric under item response theory through simulation. For smaller numbers of common items, linking using the characteristic curve method yielded smaller root mean square differences for both item discrimination and difficulty parameters. For larger numbers of common items, the three methods were…
Descriptors: Comparative Analysis, Difficulty Level, Item Response Theory, Simulation
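
The characteristic curve method referenced here is a standard linking technique. A minimal sketch of recovering linking constants by matching test characteristic curves and then computing the root mean square difference for the difficulty parameters, assuming a 2PL model and made-up common-item values:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def tcc(theta, a, b):
    # Test characteristic curve of a 2PL form (guessing omitted for brevity).
    z = np.outer(theta, a) - a * b            # a_j * (theta_i - b_j)
    return (1.0 / (1.0 + np.exp(-z))).sum(axis=1)

# ten hypothetical common items; true linking constants to recover
a_ref = rng.uniform(0.8, 2.0, 10)
b_ref = rng.normal(0.0, 1.0, 10)
A_true, B_true = 1.2, -0.3
a_new = A_true * a_ref                        # same items on the new form's scale
b_new = (b_ref - B_true) / A_true

grid = np.linspace(-4, 4, 41)

def loss(x):
    # Characteristic curve criterion: squared TCC distance after transforming
    # the new-form parameters onto the reference scale.
    A, B = x
    return np.sum((tcc(grid, a_ref, b_ref)
                   - tcc(grid, a_new / A, A * b_new + B)) ** 2)

A, B = minimize(loss, x0=[1.0, 0.0]).x
rmsd_b = np.sqrt(np.mean((A * b_new + B - b_ref) ** 2))
print(f"recovered A={A:.3f}, B={B:.3f}, RMSD(b)={rmsd_b:.4f}")
```
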

Mazor, Kathleen M.; And Others – Educational and Psychological Measurement, 1994
A variation of the Mantel-Haenszel procedure is proposed that improves detection rates of nonuniform differential item functioning (DIF) without increasing the Type I error rate. The procedure, which is illustrated with simulated examinee responses, involves splitting the sample into low- and high-performing groups. (SLD)
Descriptors: Difficulty Level, Identification, Item Analysis, Item Bias
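
A minimal sketch of the splitting idea: compute an ordinary Mantel-Haenszel statistic separately in the low- and high-scoring halves, where nonuniform DIF (which standard MH tends to miss because the group difference reverses direction across ability) becomes visible. The data-generating values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def mh_chi2(correct, focal, strata):
    # Mantel-Haenszel chi-square with continuity correction, stratified
    # on the matching variable (here: total score).
    A = E = V = 0.0
    for k in np.unique(strata):
        s = strata == k
        n = s.sum()
        r, c = focal[s].sum(), correct[s].sum()
        if n < 2 or c in (0, n) or r in (0, n):
            continue                           # degenerate stratum
        A += np.sum(correct[s] & focal[s])
        E += r * c / n
        V += r * (n - r) * c * (n - c) / (n * n * (n - 1))
    return (abs(A - E) - 0.5) ** 2 / V

# 20 Rasch items; the last has *nonuniform* DIF (group-specific slope)
N, J = 2000, 20
focal = np.arange(N) >= N // 2
theta = rng.normal(0.0, 1.0, N)
b = rng.normal(0.0, 1.0, J)
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
a_dif = np.where(focal, 0.5, 1.8)              # slopes differ by group
p[:, -1] = 1.0 / (1.0 + np.exp(-a_dif * (theta - b[-1])))
X = (rng.random((N, J)) < p).astype(int)

score = X.sum(axis=1)
low = score <= np.median(score)                # split into low/high performers
for name, half in (("low half ", low), ("high half", ~low)):
    chi2 = mh_chi2(X[half, -1] == 1, focal[half], score[half])
    print(name, "MH chi-square:", round(chi2, 2))
```
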
Mazor, Kathleen M.; And Others – 1991
The Mantel-Haenszel (MH) procedure has become one of the most popular procedures for detecting differential item functioning. Valid results with relatively small numbers of examinees represent one of the advantages typically attributed to this procedure. In this study, examinee item responses were simulated to contain differentially functioning…
Descriptors: Difficulty Level, Item Bias, Item Response Theory, Sample Size
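
The simulation design itself is elided in the snippet; one conventional way to generate responses containing uniform DIF is to shift an item's Rasch difficulty for the focal group, for example:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_dif(n_per_group=250, n_items=20, shift=0.6):
    # Rasch responses; item 0 is `shift` logits harder for the focal group.
    # Sample sizes and shift are illustrative, not the study's design.
    b = rng.normal(0.0, 1.0, n_items)
    groups = []
    for focal in (0, 1):
        theta = rng.normal(0.0, 1.0, n_per_group)
        bb = b.copy()
        bb[0] += shift * focal
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - bb)))
        groups.append((rng.random(p.shape) < p).astype(int))
    return groups

ref, foc = simulate_dif()
print("item 0 p-value  reference:", ref[:, 0].mean(), " focal:", foc[:, 0].mean())
```
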
Lau, C. Allen; Wang, Tianyou – 1999
A study was conducted to extend the sequential probability ratio testing (SPRT) procedure to the polytomous model under some practical constraints in computerized classification testing (CCT), such as methods to control the item exposure rate, and to study the effects of other variables, including item information algorithms, test difficulties, item…
Descriptors: Algorithms, Computer Assisted Testing, Difficulty Level, Item Banks
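
A minimal sketch of the SPRT decision rule in computerized classification testing, using a dichotomous Rasch model for brevity (the study extended the procedure to a polytomous model; the thresholds and bank values here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def sprt_classify(theta, bank_b, cut=0.0, delta=0.3, alpha=0.05, beta=0.05,
                  max_items=60):
    # Wald SPRT between theta = cut - delta and theta = cut + delta,
    # with items drawn at random from a Rasch bank.
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for j in rng.permutation(len(bank_b))[:max_items]:
        p_hi = 1.0 / (1.0 + np.exp(-(cut + delta - bank_b[j])))
        p_lo = 1.0 / (1.0 + np.exp(-(cut - delta - bank_b[j])))
        x = rng.random() < 1.0 / (1.0 + np.exp(-(theta - bank_b[j])))
        llr += np.log(p_hi / p_lo) if x else np.log((1 - p_hi) / (1 - p_lo))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "pass" if llr > 0 else "fail"      # forced decision at max length

bank = rng.normal(0.0, 1.0, 300)
print({t: sprt_classify(t, bank) for t in (-1.0, 0.0, 1.0)})
```
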
Zhu, Renbang; Yu, Feng – 2003
To ensure fairness, it is of critical importance that testing programs make sure that essay items given to examinees are equivalent in difficulty. The purpose of this study was to evaluate the stability and accuracy of a logistic-regression-based polytomous essay difficulty index. Preliminary results from a simulation study (9 conditions with a…
Descriptors: Difficulty Level, Essay Tests, Indexes, Measurement Techniques
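
The abstract does not define the index. One plausible logistic-regression construction, shown purely as an assumption, is the ability level at which the probability of reaching a given essay score is .5 (the negative intercept over the slope):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# hypothetical data: ability scores and a 0-5 essay score generated from
# illustrative step difficulties (not the study's design)
theta = rng.normal(0.0, 1.0, 2000)
cuts = np.array([-1.5, -0.7, 0.0, 0.8, 1.6])
essay = (theta[:, None] + rng.logistic(0.0, 1.0, (2000, 5)) > cuts).sum(axis=1)

def difficulty_index(theta, essay, k=3):
    # Ability at which P(score >= k) = .5 under a fitted logistic regression.
    y = (essay >= k).astype(int)
    fit = LogisticRegression().fit(theta.reshape(-1, 1), y)
    return -fit.intercept_[0] / fit.coef_[0, 0]

print("difficulty index (score >= 3):", round(difficulty_index(theta, essay), 3))
```
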
Chiu, Christopher W. T. – 2000
A procedure was developed to analyze data with missing observations by extracting data from a sparsely filled data matrix into analyzable smaller subsets of data. This subdividing method, based on the conceptual framework of meta-analysis, was accomplished by creating data sets that exhibit structural designs and then pooling variance components…
Descriptors: Difficulty Level, Error of Measurement, Generalizability Theory, Interrater Reliability
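
As a loose sketch of the subdividing idea (the paper's meta-analytic pooling weights are not given in the snippet, so a simple size-weighted average is assumed): extract complete person-by-rater sub-blocks from the sparse matrix, estimate variance components in each via random-effects ANOVA, then pool.

```python
import numpy as np

rng = np.random.default_rng(6)

def components(block):
    # Variance components for one complete persons-by-raters block
    # (two-way random-effects ANOVA, no replication).
    p, r = block.shape
    g = block.mean()
    pm, rm = block.mean(axis=1), block.mean(axis=0)
    ms_p = r * np.sum((pm - g) ** 2) / (p - 1)
    ms_r = p * np.sum((rm - g) ** 2) / (r - 1)
    resid = block - pm[:, None] - rm[None, :] + g
    ms_e = np.sum(resid ** 2) / ((p - 1) * (r - 1))
    return (ms_p - ms_e) / r, (ms_r - ms_e) / p, ms_e   # person, rater, error

# hypothetical sparse design: two disjoint rater panels, each scoring its own
# examinees, so each sub-block extracted from the sparse matrix is complete
blocks = []
for _ in range(2):
    person = rng.normal(0.0, 1.0, (40, 1))   # true person variance 1.0
    rater = rng.normal(0.0, 0.5, (1, 4))     # true rater variance 0.25
    blocks.append(person + rater + rng.normal(0.0, 0.8, (40, 4)))

est = np.array([components(b) for b in blocks])
w = np.array([b.size for b in blocks], dtype=float)
pooled = (est * (w / w.sum())[:, None]).sum(axis=0)  # size-weighted pooling
print("pooled person/rater/error components:", pooled.round(3))
```
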
Clauser, Brian E.; And Others – 1991
Item bias has been a major concern for test developers during recent years. The Mantel-Haenszel statistic has been among the preferred methods for identifying biased items. The statistic's performance in identifying uniform bias in simulated data modeled by producing various levels of difference in the (item difficulty) b-parameter for reference…
Descriptors: Comparative Testing, Difficulty Level, Item Bias, Item Response Theory
Groome, Mary Lynn; Groome, William R. – 1979
Angoff's method for identifying possible biased test items was applied to four computer-generated hypothetical tests, two of which contained no biased items and two of which contained a few biased items. The tests were generated to match specifications of a latent trait model. Angoff's method compared item difficulty estimates for two different…
Descriptors: Difficulty Level, Identification, Item Analysis, Mathematical Models
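
Angoff's method is the delta plot: classical p-values are transformed to deltas, plotted for the two groups, and items far from the principal axis are flagged. A sketch with made-up p-values (the 1.5-delta cutoff is a common rule of thumb, not necessarily this study's criterion):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def deltas(p):
    # Angoff's delta transform of classical p-values: 13 + 4 * z(1 - p).
    return 13.0 + 4.0 * norm.ppf(1.0 - np.clip(p, 0.001, 0.999))

# hypothetical p-values for 20 items in two groups; the last item is biased
p1 = rng.uniform(0.3, 0.9, 20)
p2 = p1 - 0.05 + rng.normal(0.0, 0.02, 20)   # group 2 slightly weaker overall
p2[-1] -= 0.25                               # extra, item-specific difficulty
p2 = np.clip(p2, 0.05, 0.95)

d1, d2 = deltas(p1), deltas(p2)
C = np.cov(d1, d2)                           # principal (major) axis of the plot
slope = (C[1, 1] - C[0, 0] + np.sqrt((C[1, 1] - C[0, 0]) ** 2
         + 4 * C[0, 1] ** 2)) / (2 * C[0, 1])
icept = d2.mean() - slope * d1.mean()
dist = np.abs(slope * d1 - d2 + icept) / np.sqrt(slope ** 2 + 1)
print("flagged items:", np.where(dist > 1.5)[0])
```
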
Schnipke, Deborah L. – 2002
A common practice in some certification fields (e.g., information technology) is to draw items from an item pool randomly and apply a common passing score, regardless of the items administered. Because these tests are commonly used, it is important to determine how accurate the pass/fail decisions are for such tests and whether fairly small,…
Descriptors: Decision Making, Difficulty Level, Item Banks, Pass Fail Grading
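
A small Monte Carlo along these lines is easy to set up: draw a random form from a Rasch bank for each simulee, apply a common raw passing score, and compare the decision with the true classification. Bank size, form length, and cut values below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

bank = rng.normal(0.0, 1.0, 500)            # hypothetical Rasch item bank
theta_cut, n_items, raw_cut = 0.0, 40, 20   # common passing score for all forms

def passes(theta):
    items = rng.choice(bank, n_items, replace=False)   # randomly drawn form
    x = rng.random(n_items) < 1.0 / (1.0 + np.exp(-(theta - items)))
    return x.sum() >= raw_cut

thetas = rng.normal(0.0, 1.0, 5000)
decisions = np.array([passes(t) for t in thetas])
truth = thetas >= theta_cut
print("pass/fail classification accuracy:", (decisions == truth).mean().round(3))
```
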
Gershon, Richard; Bergstrom, Betty – 1995
When examinees are allowed to review responses on an adaptive test, can they "cheat" the adaptive algorithm in order to take an easier test and improve their performance? Theoretically, deliberately answering items incorrectly will lower the examinee ability estimate and easy test items will be administered. If review is then allowed,…
Descriptors: Adaptive Testing, Algorithms, Cheating, Computer Assisted Testing
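
A toy illustration of the strategy under a grid-based EAP Rasch CAT (the algorithmic details are assumptions; the sketch simply assumes the examinee can answer the deliberately tanked, now-easy items correctly on review):

```python
import numpy as np

rng = np.random.default_rng(9)
grid = np.linspace(-4, 4, 161)
prior = np.exp(-grid ** 2 / 2)               # N(0, 1) prior on the grid
bank = rng.normal(0.0, 1.0, 400)

def eap(bs, xs):
    # Grid-based EAP ability estimate under the Rasch model.
    post = prior.copy()
    for b, x in zip(bs, xs):
        p = 1.0 / (1.0 + np.exp(-(grid - b)))
        post *= p if x else 1 - p
    return np.sum(grid * post) / post.sum()

def cat_run(theta=0.0, n=30, tank=10):
    # Examinee deliberately misses the first `tank` items, then on review
    # changes those (now easy) answers to correct.
    bs, xs, used = [], [], set()
    for t in range(n):
        est = eap(bs, xs)
        j = min((k for k in range(len(bank)) if k not in used),
                key=lambda k: abs(bank[k] - est))   # max-information item
        used.add(j)
        bs.append(bank[j])
        honest = rng.random() < 1.0 / (1.0 + np.exp(-(theta - bank[j])))
        xs.append(bool(honest) if t >= tank else False)
    reviewed = [True] * tank + xs[tank:]
    return eap(bs, xs), eap(bs, reviewed), float(np.mean(bs))

before, after, mean_b = cat_run()
print(f"estimate before review: {before:.2f}  after: {after:.2f}  "
      f"mean administered difficulty: {mean_b:.2f}")
```
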

Kirisci, Levent; Hsu, Tse-Chi – 1995
The main goal of this study was to assess how sensitive unidimensional parameter estimates derived from BILOG were when the unidimensionality assumption was violated and the underlying ability distribution was not multivariate normal. A multidimensional three-parameter logistic distribution that was a straightforward generalization of the…
Descriptors: Ability, Comparative Analysis, Correlation, Difficulty Level
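
A sketch of generating the kind of data described: a two-dimensional three-parameter logistic model with correlated abilities, to which a unidimensional program such as BILOG would then be fit. The dimension count, correlation, and parameter ranges are assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)

N, J = 1000, 30
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])                   # correlated latent traits
theta = rng.multivariate_normal([0.0, 0.0], R, N)
a = rng.uniform(0.5, 1.5, (J, 2))            # discriminations per dimension
d = rng.normal(0.0, 1.0, J)                  # difficulty/intercept parameters
c = np.full(J, 0.2)                          # lower asymptote (guessing)

p = c + (1.0 - c) / (1.0 + np.exp(-(theta @ a.T - d)))   # M3PL probabilities
X = (rng.random((N, J)) < p).astype(int)
print("first five item p-values:", X.mean(axis=0)[:5].round(2))
```
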
Schnipke, Deborah L. – 1996
When running out of time on a multiple-choice test, some examinees are likely to respond rapidly to the remaining unanswered items in an attempt to get some items right by chance. Because these responses will tend to be incorrect, the presence of "rapid-guessing behavior" could cause these items to appear to be more difficult than they…
Descriptors: Difficulty Level, Estimation (Mathematics), Guessing (Tests), Item Response Theory
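
The mechanism is easy to see in a simulation: mixing random guesses (chance probability 1/k) into the responses of time-pressured examinees lowers an item's observed p-value, making it look harder than the model says. The proportions below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(10)

N, k = 5000, 4                        # examinees; options per item
theta = rng.normal(0.0, 1.0, N)
b = 0.0                               # true Rasch difficulty of a late item
p_model = 1.0 / (1.0 + np.exp(-(theta - b)))

rushed = rng.random(N) < 0.25         # 25% rapid-guess on this item
resp = np.where(rushed, rng.random(N) < 1.0 / k, rng.random(N) < p_model)

print("model-implied p-value:", p_model.mean().round(3))
print("observed p-value     :", resp.mean().round(3))
```
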
Tang, Huixing – 1994
A method is presented for the simultaneous analysis of differential item functioning (DIF) in multi-factor situations. The method is unique in that it combines item response theory (IRT) and analysis of variance (ANOVA), takes a simultaneous approach to multifactor DIF analysis, and is capable of capturing interaction and controlling for possible…
Descriptors: Ability, Analysis of Variance, Difficulty Level, Error of Measurement