Showing 1 to 15 of 111 results
Peer reviewed
Direct link
Kaufmann, Esther; Budescu, David V. – Journal of Educational Measurement, 2020
The literature suggests that simple expert (mathematical) models can improve the quality of decisions, but people are not always eager to accept and endorse such models. We ran three online experiments to test the receptiveness to advice from computerized expert models. Middle- and high-school teachers (N = 435) evaluated student profiles that…
Descriptors: Mathematical Models, Computer Uses in Education, Artificial Intelligence, Expertise
Peer reviewed
Direct link
Olsen, Jennifer; Aleven, Vincent; Rummel, Nikol – Journal of Educational Measurement, 2017
Within educational data mining, many statistical models capture the learning of students working individually. However, not much work has been done to extend these statistical models of individual learning to a collaborative setting, despite the effectiveness of collaborative learning activities. We extend a widely used model (the additive factors…
Descriptors: Mathematical Models, Information Retrieval, Data Analysis, Educational Research
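The additive factors model the abstract extends can be sketched in a few lines. All parameter names and values below are illustrative, not taken from the paper:

```python
import math

def afm_prob(theta, skills, beta, gamma, practice):
    # Additive Factors Model: the logit of a correct response is the
    # student's proficiency plus, for each skill the item taps, that
    # skill's easiness plus a learning-rate effect of prior practice.
    logit = theta + sum(beta[k] + gamma[k] * practice[k] for k in skills)
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical case: one skill, three prior practice opportunities.
p = afm_prob(theta=0.2, skills=["fractions"],
             beta={"fractions": -0.5}, gamma={"fractions": 0.1},
             practice={"fractions": 3})
```

The collaborative extension the authors study would add terms for partner effects; the individual model above is just the baseline.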
Peer reviewed
Direct link
Harik, Polina; Clauser, Brian E.; Grabovsky, Irina; Nungester, Ronald J.; Swanson, Dave; Nandakumar, Ratna – Journal of Educational Measurement, 2009
The present study examined the long-term usefulness of estimated parameters used to adjust the scores from a performance assessment to account for differences in rater stringency. Ratings from four components of the USMLE® Step 2 Clinical Skills Examination were analyzed. A generalizability-theory framework was used to examine the extent to…
Descriptors: Generalizability Theory, Performance Based Assessment, Performance Tests, Clinical Experience
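As a toy illustration of the kind of stringency adjustment whose stability the study examines (a crude sketch assuming a fully crossed design, not the actual USMLE procedure):

```python
from statistics import mean

def adjust_for_stringency(ratings):
    # ratings: {rater: [scores that rater gave]}.  Estimate each rater's
    # stringency as the deviation of that rater's mean from the grand
    # mean, then remove it from that rater's scores.
    grand = mean(s for scores in ratings.values() for s in scores)
    stringency = {r: mean(scores) - grand for r, scores in ratings.items()}
    adjusted = {r: [s - stringency[r] for s in scores]
                for r, scores in ratings.items()}
    return adjusted, stringency

adjusted, stringency = adjust_for_stringency({"harsh": [3, 4],
                                              "lenient": [5, 6]})
```

The study's question is whether such estimated stringency parameters remain accurate when reused over long periods.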
Peer reviewed
Direct link
Cahan, Sorel; Gamliel, Eyal – Journal of Educational Measurement, 2006
Despite its intuitive appeal and popularity, Thorndike's constant ratio (CR) model for unbiased selection is inherently inconsistent in "n"-free selection. Satisfaction of the condition for unbiased selection, when formulated in terms of success/acceptance probabilities, usually precludes satisfaction by the converse probabilities of…
Descriptors: Probability, Bias, Mathematical Concepts, Mathematical Models
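The inconsistency the authors analyze can be seen with made-up group rates (illustrative numbers, not from the paper): when the constant-ratio condition holds for acceptance/success probabilities, it generally fails for the converse rejection/failure probabilities.

```python
# Hypothetical rates chosen so the CR condition holds for
# acceptance/success probabilities in both groups:
groups = {"A": {"success": 0.60, "accept": 0.30},
          "B": {"success": 0.40, "accept": 0.20}}

accept_ratios = {g: v["accept"] / v["success"] for g, v in groups.items()}
reject_ratios = {g: (1 - v["accept"]) / (1 - v["success"])
                 for g, v in groups.items()}
# accept_ratios are equal (0.5 each), yet the converse rejection/failure
# ratios differ (1.75 vs. about 1.33): the condition cannot hold for both.
```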
Peer reviewed
Cardinet, Jean; And Others – Journal of Educational Measurement, 1976
When research focuses on the conditions of measurement, the dimensions of the measurement design should be transposed to differentiate conditions while generalizing over persons. To clarify this transposition, the notions of face of differentiation and face of generalization are introduced as complementary aspects of the design. An example is…
Descriptors: Generalization, Mathematical Models, Research Design, Statistical Analysis
Peer reviewed
Marsh, Herbert W. – Journal of Educational Measurement, 1993
Structural equation models of the same construct measured on different occasions are evaluated in two studies: evaluations of 157 college instructors over 8 years, and data for over 2,200 high school students over 4 years from the Youth in Transition Study. Results challenge overreliance on simplex models. (SLD)
Descriptors: College Faculty, Comparative Analysis, High School Students, High Schools
Peer reviewed
Alexander, Ralph A. – Journal of Educational Measurement, 1990
This note shows that the formula suggested by N. D. Bryant and S. Gokhale (1972) for correcting indirectly restricted correlations when no information is available on the third (directly restricted) variable is accurate only in one special instance. A more general correction formula is illustrated. (SLD)
Descriptors: Correlation, Equations (Mathematics), Mathematical Models, Selection
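For background, the standard correction for the simpler, direct case (Thorndike's Case II) can be sketched as follows; the note itself concerns indirect restriction through an unobserved third variable, and its more general formula is not reproduced here.

```python
import math

def correct_direct_restriction(r_restricted, sd_unrestricted, sd_restricted):
    # Thorndike's Case II correction for DIRECT range restriction on the
    # selection variable: r_c = r*u / sqrt(1 - r^2 + r^2 * u^2), where u is
    # the ratio of unrestricted to restricted standard deviations.
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u /
            math.sqrt(1 - r_restricted**2 + r_restricted**2 * u**2))

r_corrected = correct_direct_restriction(0.3, sd_unrestricted=2.0,
                                         sd_restricted=1.0)
```

With no restriction (u = 1) the formula returns the observed correlation unchanged, as it should.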
Peer reviewed
Novick, Melvin R.; Lindley, Dennis V. – Journal of Educational Measurement, 1978
The use of some very simple loss or utility functions in educational evaluation has recently been advocated by Gross and Su, Petersen and Novick, and Petersen. This paper demonstrates that more realistic utility functions can easily be used and may be preferable in some applications. (Author/CTM)
Descriptors: Bayesian Statistics, Cost Effectiveness, Mathematical Models, Statistical Analysis
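The contrast between a simple threshold utility and a smoother alternative can be sketched numerically (the utility functions and outcome distribution below are illustrative choices, not the authors'):

```python
import math

def expected_utility(utility, outcome_probs):
    # E[u] over a discrete outcome distribution: sum of p * u(x).
    return sum(p * utility(x) for x, p in outcome_probs)

# A "simple" 0/1 threshold utility of the kind the authors move beyond...
def threshold_utility(score, cut=0.6):
    return 1.0 if score >= cut else 0.0

# ...versus a smoother, concave utility with diminishing returns.
def concave_utility(score):
    return math.sqrt(score)

dist = [(0.5, 0.5), (0.7, 0.5)]   # two equally likely outcome scores
```

The threshold utility ignores how far scores fall above or below the cut; the concave utility does not, which is the kind of realism the paper argues is easy to incorporate.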
Peer reviewed
Brennan, Robert L.; Kane, Michael T. – Journal of Educational Measurement, 1977
An index for the dependability of mastery tests is described. Assumptions necessary for the index and the mathematical development of the index are provided. (Author/JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Test Reliability
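One commonly cited single-administration form of the Brennan-Kane index, written from memory, can be sketched as follows; treat it as a sketch and consult the paper for the exact estimator and its assumptions.

```python
def phi_lambda(prop_scores, n_items, cut):
    # Dependability index Phi(lambda) for a mastery test with cut score
    # `cut`, in the proportion-correct metric, estimated from one
    # administration via a KR-21-like form.  When cut equals the mean,
    # this reduces to the KR-21 reliability coefficient.
    n = len(prop_scores)
    m = sum(prop_scores) / n
    s2 = sum((x - m) ** 2 for x in prop_scores) / n   # biased variance
    return 1 - (m * (1 - m) - s2) / ((n_items - 1) * ((m - cut) ** 2 + s2))

phi = phi_lambda([0.5, 0.7, 0.9], n_items=10, cut=0.6)
```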
Peer reviewed
Gardner, P. L. – Journal of Educational Measurement, 1970
Descriptors: Error of Measurement, Mathematical Models, Statistical Analysis, Test Reliability
Peer reviewed
Waller, Michael I. – Journal of Educational Measurement, 1981
A method based on the likelihood ratio procedure is presented for use in selecting a measurement model from among the Rasch, two-parameter, and three-parameter logistic latent trait models. (Author/BW)
Descriptors: Comparative Analysis, Goodness of Fit, Latent Trait Theory, Mathematical Models
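Once each competing model has been fitted, the likelihood-ratio comparison itself is a one-liner (the log-likelihood values below are hypothetical):

```python
def lr_statistic(loglik_restricted, loglik_general):
    # G2 = -2 ln(L_restricted / L_general).  Under the restricted model,
    # G2 is approximately chi-square with df equal to the number of extra
    # free parameters in the general model (e.g., one discrimination per
    # item when testing the Rasch model against the 2PL).
    return -2.0 * (loglik_restricted - loglik_general)

g2 = lr_statistic(loglik_restricted=-1052.3, loglik_general=-1040.8)
# Compare g2 against the chi-square critical value for the df difference.
```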
Peer reviewed
Wilcox, Rand R. – Journal of Educational Measurement, 1982
A new model for measuring misinformation is suggested. A modification of Wilcox's strong true-score model is indicated for certain situations, since it corrects for guessing without assuming that guessing is random. (Author/GK)
Descriptors: Achievement Tests, Guessing (Tests), Mathematical Models, Scoring Formulas
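For contrast, the classical formula-scoring correction rests on exactly the random-guessing assumption that Wilcox's model is designed to avoid:

```python
def formula_score(n_right, n_wrong, n_options):
    # Classical correction for guessing: assumes every wrong answer came
    # from random guessing among n_options equally likely choices, so the
    # expected number of lucky guesses is n_wrong / (n_options - 1).
    return n_right - n_wrong / (n_options - 1)

score = formula_score(n_right=30, n_wrong=10, n_options=5)
```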
Peer reviewed
Huynh, Huynh – Journal of Educational Measurement, 1976
Within the beta-binomial Bayesian framework, procedures are described for the evaluation of the kappa index of reliability on the basis of one administration of a domain-referenced test. Major factors affecting this index include cutoff score, test score variability and test length. Empirical data which substantiate some theoretical trends deduced…
Descriptors: Criterion Referenced Tests, Decision Making, Mathematical Models, Probability
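The target quantity is a chance-corrected kappa for the consistency of mastery/nonmastery classifications. The sketch below computes it from two testings; Huynh's contribution, estimating it from a single administration via a beta-binomial model, is not reproduced here.

```python
def classification_kappa(pairs, cut):
    # pairs: (score on form 1, score on form 2) for each examinee.
    # kappa = (p0 - pc) / (1 - pc), where p0 is observed classification
    # agreement and pc is the agreement expected by chance alone.
    n = len(pairs)
    m1 = sum(a >= cut for a, b in pairs) / n        # mastery rate, form 1
    m2 = sum(b >= cut for a, b in pairs) / n        # mastery rate, form 2
    p0 = sum((a >= cut) == (b >= cut) for a, b in pairs) / n
    pc = m1 * m2 + (1 - m1) * (1 - m2)              # chance agreement
    return (p0 - pc) / (1 - pc)

k = classification_kappa([(8, 9), (5, 4), (9, 8), (3, 6)], cut=6)
```

As the abstract notes, the index depends heavily on the cutoff score and the score variability, which shift both p0 and pc.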
Peer reviewed
Subkoviak, Michael J. – Journal of Educational Measurement, 1976
A number of different reliability coefficients have recently been proposed for tests used to differentiate between groups such as masters and nonmasters. One promising index is the proportion of students in a class who are consistently assigned to the same mastery group across two testings. The present paper proposes a single test administration…
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Probability
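The binomial-error-model idea behind a single-administration estimate can be sketched as follows. True scores are supplied directly here; estimating them from observed scores (e.g., by regression toward the mean) is the substantive step the paper addresses and is omitted.

```python
from math import comb

def prob_classified_master(p_true, n_items, cut_raw):
    # P(observed number-correct >= cut_raw) under a binomial error model.
    return sum(comb(n_items, x) * p_true**x * (1 - p_true)**(n_items - x)
               for x in range(cut_raw, n_items + 1))

def single_admin_consistency(true_scores, n_items, cut_raw):
    # For an examinee whose probability of clearing the cut is P, the
    # chance of the SAME classification on two parallel testings is
    # P^2 + (1 - P)^2; average this over examinees to approximate the
    # consistency index.
    probs = [prob_classified_master(p, n_items, cut_raw)
             for p in true_scores]
    return sum(P**2 + (1 - P)**2 for P in probs) / len(probs)
```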
Peer reviewed
Peng, Chao-Ying J.; Subkoviak, Michael J. – Journal of Educational Measurement, 1980
Huynh (1976) suggested a method of approximating the reliability coefficient of a mastery test. The present study examines the accuracy of Huynh's approximation and also describes a computationally simpler approximation which appears to be generally more accurate than the former. (Author/RL)
Descriptors: Error of Measurement, Mastery Tests, Mathematical Models, Statistical Analysis