Showing all 14 results
Peer reviewed
Brückner, Sebastian; Pellegrino, James W. – Journal of Educational Measurement, 2016
The Standards for Educational and Psychological Testing indicate that validation of assessments should include analyses of participants' response processes. However, such analyses typically are conducted only to supplement quantitative field studies with qualitative data, and seldom are such data connected to quantitative data on student or item…
Descriptors: Hierarchical Linear Modeling, Test Validity, Statistical Analysis, College Students
Peer reviewed
Tendeiro, Jorge N.; Meijer, Rob R. – Journal of Educational Measurement, 2014
Recent guidelines for fair educational testing advise checking the validity of individual test scores through the use of person-fit statistics. On the basis of the existing literature, however, it is unclear to practitioners which statistic to use. An overview of relatively simple existing nonparametric approaches to identify atypical response…
Descriptors: Educational Assessment, Test Validity, Scores, Statistical Analysis
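One of the simplest nonparametric person-fit approaches covered in overviews like the one above counts Guttman errors: with items ordered from easiest to hardest, an error is any item pair in which the examinee fails the easier item yet passes the harder one. A minimal illustrative sketch (the response vector is hypothetical, not from the article):

```python
def guttman_errors(responses):
    """Count item pairs that violate the expected easy-to-hard ordering.

    `responses` is a 0/1 response vector ordered from easiest to
    hardest item; an error is any pair where an easier item is
    failed (0) while a harder item is passed (1).
    """
    errors = 0
    for i in range(len(responses)):
        for j in range(i + 1, len(responses)):
            if responses[i] == 0 and responses[j] == 1:
                errors += 1
    return errors

# A perfectly Guttman-consistent pattern has zero errors:
guttman_errors([1, 1, 1, 0, 0])  # → 0
# Failing an easy item while passing a harder one adds an error:
guttman_errors([1, 1, 0, 1, 0])  # → 1
```

Large counts flag atypical (possibly invalid) response patterns; more refined indices normalize this count by its maximum for a given total score.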
Peer reviewed
Chen, Jinsong; de la Torre, Jimmy; Zhang, Zao – Journal of Educational Measurement, 2013
As with any psychometric model, the validity of inferences from cognitive diagnosis models (CDMs) determines the extent to which these models can be useful. For inferences from CDMs to be valid, it is crucial that the fit of the model to the data is ascertained. Based on a simulation study, this study investigated the sensitivity of various fit…
Descriptors: Models, Psychometrics, Goodness of Fit, Statistical Analysis
Peer reviewed
Novick, Melvin R.; Lindley, Dennis V. – Journal of Educational Measurement, 1978
The use of some very simple loss or utility functions in educational evaluation has recently been advocated by Gross and Su, Petersen and Novick, and Petersen. This paper demonstrates that more realistic utility functions can easily be used and may be preferable in some applications. (Author/CTM)
Descriptors: Bayesian Statistics, Cost Effectiveness, Mathematical Models, Statistical Analysis
Peer reviewed
D'Agostino, Ralph B. – Journal of Educational Measurement, 1971
Relations between standard statistical techniques for analyzing dichotomous data and ANOVA procedures are indicated. The need for and usefulness of analyzing transformed data, as opposed to direct analysis of dichotomous data, are discussed. Required statistical procedures employing transformed data are outlined. (Author/AG)
Descriptors: Analysis of Variance, Data Analysis, Interaction, Sampling
Peer reviewed
Pyrczak, Fred – Journal of Educational Measurement, 1973
Despite the numerous individual illustrations in the literature showing how the discrimination index may be used to identify items with faults, its overall effectiveness as a measure of item quality, defined in terms of the presence or absence of faults, is not clear. This study investigates its validity. (Author/RK)
Descriptors: Correlation, Discriminant Analysis, Item Banks, Rating Scales
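The discrimination index studied above is conventionally computed as the difference between the proportions of upper- and lower-scoring examinees who answer an item correctly. A minimal sketch (the counts and group size are illustrative, not taken from the study):

```python
def discrimination_index(upper_correct, lower_correct, group_size):
    """Classical discrimination index D: proportion correct in the
    upper-scoring group minus proportion correct in the lower-scoring
    group (groups of equal size, e.g. the top and bottom 27%)."""
    return upper_correct / group_size - lower_correct / group_size

# 24 of 27 high scorers vs. 10 of 27 low scorers answer correctly:
d = discrimination_index(24, 10, 27)  # about 0.52
```

Values near zero (or negative) are the usual signal of a faulty item; the study's question is how reliably that signal tracks actual item faults.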
Peer reviewed
Linn, Robert L. – Journal of Educational Measurement, 1984
The common approach to studies of predictive bias is analyzed within the context of a conceptual model in which predictors and criterion measures are viewed as fallible indicators of idealized qualifications. (Author/PN)
Descriptors: Certification, Models, Predictive Measurement, Predictive Validity
Peer reviewed
Beuchert, A. Kent; Mendoza, Jorge L. – Journal of Educational Measurement, 1979
Ten item discrimination indices, across a variety of item analysis situations, were compared, based on the validities of tests constructed by using each of the indices to select 40 items from a 100-item pool. Item score data were generated by a computer program and included a simulation of guessing. (Author/CTM)
Descriptors: Item Analysis, Simulation, Statistical Analysis, Test Construction
Peer reviewed
Lomax, Richard G.; Algina, James – Journal of Educational Measurement, 1979
Results of using multimethod factor analysis and exploratory factor analysis for the analysis of three multitrait-multimethod matrices are compared. Results suggest that the two methods can give quite different impressions of discriminant validity. In the examples considered, the former procedure tends to support discrimination while the latter…
Descriptors: Comparative Analysis, Factor Analysis, Goodness of Fit, Matrices
Peer reviewed
Winne, Philip H.; Belfry, M. Joan – Journal of Educational Measurement, 1982
This review of issues about correcting for attenuation concludes that the basic difficulty lies in being able to identify and equate sources of variance in estimates of validity and reliability. Recommendations are proposed for cautious use of correction for attenuation. (Author/CM)
Descriptors: Correlation, Error of Measurement, Research Methodology, Statistical Analysis
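The correction for attenuation reviewed above is, in its classical (Spearman) form, the observed correlation divided by the square root of the product of the two measures' reliabilities. A minimal sketch with illustrative values (not drawn from the article):

```python
import math

def correct_for_attenuation(r_xy, r_xx, r_yy):
    """Spearman's correction for attenuation: estimate the correlation
    between true scores from the observed correlation r_xy and the
    reliabilities r_xx and r_yy of the two measures."""
    return r_xy / math.sqrt(r_xx * r_yy)

# An observed validity coefficient of .40, with reliabilities of
# .70 and .80, is corrected upward to roughly .53:
corrected = correct_for_attenuation(0.40, 0.70, 0.80)
```

The review's caution is visible in the formula: underestimating either reliability inflates the corrected coefficient, which is one reason the authors recommend using the correction conservatively.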
Peer reviewed
Kolen, Michael J.; Whitney, Douglas R. – Journal of Educational Measurement, 1978
Nine methods of smoothing double-entry expectancy tables (tables that relate two predictor variables to probability of attaining success on a criterion) were compared using data for entering students at 85 colleges and universities. The smoothed tables were more accurate than those based on observed relative frequencies. (Author/CTM)
Descriptors: College Entrance Examinations, Expectancy Tables, Grade Prediction, High Schools
Peer reviewed
Stanley, Julian C. – Journal of Educational Measurement, 1987
Meta-analysis of research on a topic may exclude reports that do not present their results in a statistical form amenable to the summarizing procedures used. This article illustrates how researchers can sometimes compute the needed statistics from the data they report. (Author/JAZ)
Descriptors: College Entrance Examinations, Correlation, Mathematics Tests, Meta Analysis
Peer reviewed
Wang, Tianyou; Kolen, Michael J. – Journal of Educational Measurement, 2001
Reviews research literature on comparability issues in computerized adaptive testing (CAT) and synthesizes issues specific to comparability and test security. Develops a framework for evaluating comparability that contains three categories of criteria: (1) validity; (2) psychometric property/reliability; and (3) statistical assumption/test…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Criteria
Peer reviewed
Frary, Robert B. – Journal of Educational Measurement, 1985
Responses to a sample test were simulated for examinees under free-response and multiple-choice formats. Test score sets were correlated with randomly generated sets of unit-normal measures. The extent of superiority of free response tests was sufficiently small so that other considerations might justifiably dictate format choice. (Author/DWH)
Descriptors: Comparative Analysis, Computer Simulation, Essay Tests, Guessing (Tests)