Showing 1 to 15 of 16 results
Peer reviewed
Chen, Binglin; West, Matthew; Zilles, Craig – International Educational Data Mining Society, 2018
This paper attempts to quantify the accuracy limit of "next-item-correct" prediction by using numerical optimization to estimate the student's probability of getting each question correct given a complete sequence of item responses. This optimization is performed without an explicit parameterized model of student behavior, but with the…
Descriptors: Accuracy, Probability, Student Behavior, Test Items
Peer reviewed
Liu, Han-Chin; Chuang, Hsueh-Hua – Interactive Learning Environments, 2011
This study investigated how the format of verbal instructions in computer simulations and prior knowledge (PK) affected 8th graders' cognitive load (CL) level and achievement in a multimedia learning environment. Although PK was not found to significantly affect student performance and CL level, instruction format was found to impact both.…
Descriptors: Electronic Learning, Instructional Design, Prior Learning, Grade 8
Ackerman, Terry A. – 1987
Concern has been expressed over the item response theory (IRT) assumption that a person's ability can be estimated in a unidimensional latent space. To examine whether or not the response to an item requires only a single latent ability, unidimensional ability estimates were compared for data generated from the multidimensional item response…
Descriptors: Ability, Computer Simulation, Difficulty Level, Item Analysis
Peer reviewed
Rosenbaum, Paul R. – Psychometrika, 1987
This paper develops and applies three nonparametric comparisons of the shapes of two item characteristic surfaces: (1) proportional latent odds; (2) uniform relative difficulty; and (3) item sensitivity. A method is presented for comparing the relative shapes of two item characteristic curves in two examinee populations who were administered an…
Descriptors: Comparative Analysis, Computer Simulation, Difficulty Level, Item Analysis
Lecointe, Darius A. – 1995
The purpose of this Item Response Theory study was to investigate how the expected reduction in item information, due to the collapsing of response categories in performance assessment data, was affected by varying testing conditions: item difficulty, item discrimination, inter-rater reliability, and direction of collapsing. The investigation used…
Descriptors: Classification, Computer Simulation, Difficulty Level, Interrater Reliability
Zeng, Lingjia; Bashaw, Wilbur L. – 1990
A joint maximum likelihood estimation algorithm, based on the partial compensatory multidimensional logistic model (PCML) proposed by L. Zeng (1989), is presented. The algorithm simultaneously estimates item difficulty parameters, the strength of each dimension, and individuals' abilities on each of the dimensions involved in arriving at a correct…
Descriptors: Ability Identification, Algorithms, Computer Simulation, Difficulty Level
Peer reviewed
Liou, Michelle – Applied Psychological Measurement, 1988
In applying I. I. Bejar's method for detecting the dimensionality of achievement tests, researchers should be cautious in interpreting the slope of the principal axis. Other information from the data is needed in conjunction with Bejar's method of addressing item dimensionality. (SLD)
Descriptors: Achievement Tests, Computer Simulation, Difficulty Level, Equated Scores
Reinhardt, Brian M. – 1991
Factors affecting a lower-bound estimate of internal consistency reliability, Cronbach's coefficient alpha, are explored. Theoretically, coefficient alpha is an estimate of the correlation between two tests drawn at random from a pool of items like the items in the test under consideration. As a practical matter, coefficient alpha can be an index…
Descriptors: Computer Simulation, Correlation, Difficulty Level, Estimation (Mathematics)
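The quantity Reinhardt examines can be sketched numerically: coefficient alpha compares the sum of item variances to the variance of total scores. The helper name `cronbach_alpha` and the simulated "nearly parallel items" data below are illustrative, not from the paper:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's coefficient alpha for an (examinees x items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Nearly parallel items (shared ability plus small noise) push alpha toward 1;
# independent noise columns would drive it toward 0.
rng = np.random.default_rng(0)
ability = rng.normal(size=(500, 1))
parallel = ability + 0.1 * rng.normal(size=(500, 5))
print(round(cronbach_alpha(parallel), 3))
```

Because alpha is only a lower bound on reliability, values like this are best read alongside the item difficulty and dimensionality checks discussed in the abstract.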
Peer reviewed
Liou, Michelle; Chang, Chih-Hsin – Psychometrika, 1992
An extension is proposed for the network algorithm introduced by C.R. Mehta and N.R. Patel to construct exact tail probabilities for testing the general hypothesis that item responses are distributed according to the Rasch model. A simulation study indicates the efficiency of the algorithm. (SLD)
Descriptors: Algorithms, Computer Simulation, Difficulty Level, Equations (Mathematics)
Drasgow, Fritz; Parsons, Charles K. – 1982
The effects of a multidimensional latent trait space on estimation of item and person parameters by the computer program LOGIST are examined. Several item pools were simulated that ranged from truly unidimensional to an inconsequential general latent trait. Item pools with intermediate levels of prepotency of the general latent trait were also…
Descriptors: Computer Simulation, Computer Software, Difficulty Level, Item Analysis
Gilmer, Jerry S. – 1987
The proponents of test disclosure argue that disclosure is a matter of fairness; the opponents argue that fairness is enhanced by score equating which is dependent on test security. This research simulated disclosure on a professional licensing examination by placing response keys to selected items in some examinees' records, and comparing their…
Descriptors: Adults, Answer Keys, Computer Simulation, Cutting Scores
Peer reviewed
Dodd, Barbara G.; And Others – Educational and Psychological Measurement, 1993
Effects of the following variables on performance of computerized adaptive testing (CAT) procedures for the partial credit model (PCM) were studied: (1) stopping rule for terminating CAT; (2) item pool size; and (3) distribution of item difficulties. Implications of findings for CAT systems based on the PCM are discussed. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Difficulty Level
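The stopping-rule variable Dodd and others manipulate can be sketched for the simpler dichotomous Rasch case (the study itself uses the partial credit model); every name, threshold, and pool value below is an illustrative assumption:

```python
import math
import random

def info_rasch(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def estimate_theta(responses, difficulties):
    """Grid-search maximum-likelihood ability estimate, clamped to [-4, 4]."""
    grid = [x / 10.0 for x in range(-40, 41)]
    def loglik(t):
        ll = 0.0
        for u, b in zip(responses, difficulties):
            p = 1.0 / (1.0 + math.exp(-(t - b)))
            ll += math.log(p if u else 1.0 - p)
        return ll
    return max(grid, key=loglik)

def run_cat(true_theta, pool, se_stop=0.5, max_items=30, seed=1):
    """Administer maximum-information items until SE(theta) < se_stop."""
    rng = random.Random(seed)
    theta, used, resp = 0.0, [], []
    remaining = list(pool)
    while remaining and len(used) < max_items:
        b = max(remaining, key=lambda d: info_rasch(theta, d))
        remaining.remove(b)
        p_true = 1.0 / (1.0 + math.exp(-(true_theta - b)))
        resp.append(1 if rng.random() < p_true else 0)
        used.append(b)
        theta = estimate_theta(resp, used)
        se = 1.0 / math.sqrt(sum(info_rasch(theta, d) for d in used))
        if se < se_stop:
            break
    return theta, len(used)

# A dense pool spanning the ability range lets the SE criterion fire
# before the fixed-length ceiling; sparse pools exhaust max_items instead.
pool = [i / 10.0 for i in range(-30, 31)]   # 61 items, b in [-3, 3]
theta_hat, n = run_cat(true_theta=0.5, pool=pool)
print(theta_hat, n)
```

The interplay the abstract describes is visible here: tightening `se_stop` or thinning `pool` changes how often the test terminates by precision rather than by length.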
Muraki, Eiji – 1984
The TESTFACT computer program and full-information factor analysis of test items were used in a computer simulation conducted to correct for the guessing effect. Full-information factor analysis also corrects for omitted items. The present version of TESTFACT handles up to five factors and 150 items. A preliminary smoothing of the tetrachoric…
Descriptors: Comparative Analysis, Computer Simulation, Computer Software, Correlation
Samejima, Fumiko – 1986
Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Item Analysis
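The model Samejima fits can be stated compactly. This is a minimal sketch of the three-parameter logistic item response function with the conventional scaling constant D = 1.7; the function name `p_3pl` is illustrative:

```python
import math

def p_3pl(theta, a, b, c):
    """Three-parameter logistic model:
    P(u = 1 | theta) = c + (1 - c) / (1 + exp(-D * a * (theta - b)))
    with discrimination a, difficulty b, lower asymptote (guessing) c,
    and scaling constant D = 1.7.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

# At theta = b the probability is midway between c and 1,
# i.e. (1 + c) / 2 regardless of the discrimination a.
print(p_3pl(0.0, 1.0, 0.0, 0.2))
```

The lower asymptote c is what distinguishes this model from the normal ogive data-generating model in the abstract, and is the source of the estimation problems the study investigates.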
Ackerman, Terry A. – 1987
The purpose of this study was to investigate the effect of using multidimensional items in a computer adaptive test (CAT) setting which assumes a unidimensional item response theory (IRT) framework. Previous research has suggested that the composite of multidimensional abilities being estimated by a unidimensional IRT model is not constant…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Computer Simulation