Showing all 12 results
Peer reviewed
Direct link
Kaufmann, Esther; Budescu, David V. – Journal of Educational Measurement, 2020
The literature suggests that simple expert (mathematical) models can improve the quality of decisions, but people are not always eager to accept and endorse such models. We ran three online experiments to test the receptiveness to advice from computerized expert models. Middle- and high-school teachers (N = 435) evaluated student profiles that…
Descriptors: Mathematical Models, Computer Uses in Education, Artificial Intelligence, Expertise
Peer reviewed
Direct link
Joo, Seang-Hwane; Lee, Philseok; Stark, Stephen – Journal of Educational Measurement, 2018
This research derived information functions and proposed new scalar information indices to examine the quality of multidimensional forced choice (MFC) items based on the RANK model. We also explored how GGUM-RANK information, latent trait recovery, and reliability varied across three MFC formats: pairs (two response alternatives), triplets (three…
Descriptors: Item Response Theory, Models, Item Analysis, Reliability
Peer reviewed
Direct link
Skaggs, Gary; Hein, Serge F.; Wilkins, Jesse L. M. – Journal of Educational Measurement, 2016
This article introduces the Diagnostic Profiles (DP) standard setting method for setting a performance standard on a test developed from a cognitive diagnostic model (CDM), the outcome of which is a profile of mastered and not-mastered skills or attributes rather than a single test score. In the DP method, the key judgment task for panelists is a…
Descriptors: Models, Standard Setting, Profiles, Diagnostic Tests
Peer reviewed
Kalohn, John C.; Spray, Judith A. – Journal of Educational Measurement, 1999
Examined the effects of model misspecification on the precision of decisions made using the sequential probability ratio test (SPRT) in computer testing. Simulation results show that the one-parameter logistic model produced more errors than the true model. (SLD)
Descriptors: Classification, Computer Assisted Testing, Decision Making, Models
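The sequential probability ratio test mentioned in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' simulation code: it assumes a one-parameter logistic (1PL) item model, hypothetical mastery and non-mastery ability points `theta0`/`theta1`, and Wald's standard decision bounds derived from nominal error rates.

```python
import math

def p_correct(theta, b):
    """1PL probability of a correct response given ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, difficulties, theta0=-0.5, theta1=0.5,
                  alpha=0.05, beta=0.05):
    """Wald's SPRT: accumulate the log-likelihood ratio of each scored
    response (1 = correct, 0 = incorrect) under theta1 vs. theta0 and
    stop as soon as the ratio crosses a decision bound."""
    upper = math.log((1 - beta) / alpha)   # cross -> classify as master
    lower = math.log(beta / (1 - alpha))   # cross -> classify as non-master
    llr = 0.0
    for x, b in zip(responses, difficulties):
        p1, p0 = p_correct(theta1, b), p_correct(theta0, b)
        llr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "master"
        if llr <= lower:
            return "non-master"
    return "undecided"  # item pool exhausted before a bound was crossed
```

Model misspecification of the kind the study examines would enter here through the item response function: scoring with 1PL probabilities when the data were generated by a richer model distorts the accumulated ratio and hence the error rates.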
Peer reviewed
Sawyer, Richard L.; And Others – Journal of Educational Measurement, 1976
This article examines some of the values that might be considered in a selection situation within the context of a decision-theoretic model also described here. Several alternate expressions of fair selection are suggested in the form of utility statements in which these values can be understood and compared. (Author/DEP)
Descriptors: Bias, Decision Making, Evaluation Criteria, Models
Peer reviewed
Huynh, Huynh – Journal of Educational Measurement, 1976
Within the beta-binomial Bayesian framework, procedures are described for the evaluation of the kappa index of reliability on the basis of one administration of a domain-referenced test. Major factors affecting this index include cutoff score, test score variability and test length. Empirical data which substantiate some theoretical trends deduced…
Descriptors: Criterion Referenced Tests, Decision Making, Mathematical Models, Probability
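A kappa index of this kind can be sketched in the beta-binomial framework as follows. This is an illustrative reconstruction under stated assumptions, not the published procedure verbatim: true scores follow a hypothetical Beta(alpha, beta) distribution, two parallel administrations are conditionally independent given true ability, and mastery is decided by a cut score on an n-item test.

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    """Log of the Beta function via log-gamma, for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def kappa_beta_binomial(n, cutoff, alpha, beta):
    """Kappa for mastery/non-mastery decisions on an n-item test with cut
    score `cutoff`, under a beta-binomial true-score model."""
    lb = log_beta(alpha, beta)

    def pmf(x):
        # marginal beta-binomial probability of observed score x
        return comb(n, x) * exp(log_beta(alpha + x, beta + n - x) - lb)

    def joint(x, y):
        # joint score probability on two parallel administrations
        return (comb(n, x) * comb(n, y)
                * exp(log_beta(alpha + x + y, beta + 2 * n - x - y) - lb))

    p_master = sum(pmf(x) for x in range(cutoff, n + 1))
    p_both = sum(joint(x, y)
                 for x in range(cutoff, n + 1)
                 for y in range(cutoff, n + 1))
    # agreement = P(both master) + P(both non-master)
    p_obs = 1 - 2 * p_master + 2 * p_both
    p_chance = p_master ** 2 + (1 - p_master) ** 2
    return (p_obs - p_chance) / (1 - p_chance)
```

The factors the abstract names fall out directly: moving `cutoff` or changing `n` changes both the agreement and chance-agreement terms, and a longer test concentrates observed scores around true ability, pushing kappa upward.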
Peer reviewed
Petersen, Nancy S.; Novick, Melvin R. – Journal of Educational Measurement, 1976
Compares and evaluates models for bias in selection. Strategies are compared and evaluated as to their advantages and disadvantages in the areas of business and education. Some suggested formats for establishing culture-fair selection are judged by the authors to be inadequate for their task and to require a more complex analysis. (Author/DEP)
Descriptors: Bias, Culture Fair Tests, Decision Making, Evaluation Criteria
Peer reviewed
Linn, Robert L. – Journal of Educational Measurement, 1976
Discusses some models, including the Petersen Novick Model (TM 502 259) regarding fair selection procedures. (DEP)
Descriptors: Bias, Decision Making, Evaluation Criteria, Models
Peer reviewed
Novick, Melvin R.; Petersen, Nancy S. – Journal of Educational Measurement, 1976
The authors comment on, and provide an updated statement of, their views on the four preceding articles, which deal with the fair use of tests in educational and employment selection. (Author/DEP)
Descriptors: Affirmative Action, Bias, Culture Fair Tests, Decision Making
Peer reviewed
Wilcox, Rand R.; And Others – Journal of Educational Measurement, 1988
The second-response conditional probability model of decision-making strategies used by examinees answering multiple-choice test items was revised. Increasing the number of distractors, or providing distractors that gave examinees (N=106) the option to follow the model, improved results and gave a good fit to the data for 29 of 30 items. (SLD)
Descriptors: Cognitive Tests, Decision Making, Mathematical Models, Multiple Choice Tests
Peer reviewed
Hambleton, Ronald K.; Novick, Melvin R. – Journal of Educational Measurement, 1973
In this paper, an attempt has been made to synthesize some of the current thinking in the area of criterion-referenced testing as well as to provide the beginning of an integration of theory and method for such testing. (Editor)
Descriptors: Bayesian Statistics, Criterion Referenced Tests, Decision Making, Definitions
Peer reviewed
Swaminathan, H.; And Others – Journal of Educational Measurement, 1975
A decision-theoretic procedure is outlined that provides a framework within which Bayesian statistical methods can be employed with criterion-referenced tests to improve the quality of decision making in objectives-based instructional programs. (Author/DEP)
Descriptors: Bayesian Statistics, Computer Assisted Instruction, Criterion Referenced Tests, Decision Making