Showing 5,101 to 5,115 of 9,547 results
Peer reviewed
Parshall, Cynthia G.; Miller, Timothy R. – Journal of Educational Measurement, 1995
Exact testing was evaluated as a method for conducting Mantel-Haenszel differential item functioning (DIF) analyses with relatively small samples. A series of computer simulations found that the asymptotic Mantel-Haenszel and the exact method yielded very similar results across sample sizes, levels of DIF, and data sets. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Identification, Item Bias
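For orientation only (this is not code from the article), the asymptotic Mantel-Haenszel statistic evaluated above can be computed from the matched-score 2x2 tables as sketched below; the counts are invented purely for illustration.

```python
import numpy as np

# Stratified 2x2 tables, one per matched score level k:
# rows = (reference, focal) group, columns = (correct, incorrect).
# These counts are invented for illustration only.
tables = [
    np.array([[40, 10], [35, 15]]),
    np.array([[30, 20], [22, 28]]),
    np.array([[20, 30], [12, 38]]),
]

num = den = a_sum = e_sum = v_sum = 0.0
for t in tables:
    a, b = t[0]          # reference group: correct, incorrect
    c, d = t[1]          # focal group: correct, incorrect
    n = t.sum()
    num += a * d / n     # numerator of the MH common odds ratio
    den += b * c / n     # denominator of the MH common odds ratio
    a_sum += a
    e_sum += (a + b) * (a + c) / n                              # E(A_k) under the null
    v_sum += (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))  # Var(A_k)

alpha_mh = num / den                                # common odds ratio
chi2_mh = (abs(a_sum - e_sum) - 0.5) ** 2 / v_sum   # continuity-corrected MH chi-square
print(alpha_mh, chi2_mh)
```

The exact method studied in the article replaces this chi-square approximation with the permutation (hypergeometric) distribution of the summed counts; that computation is not shown here.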
Peer reviewed
Bacon, Donald R.; And Others – Educational and Psychological Measurement, 1995
The potential for bias in reliability estimation and for errors in item selection when alpha or unit-weighted omega coefficients are used is explored under simulated conditions. Results suggest that composite reliability may be useful as an assessment tool but should not be used as an item selection tool in structural equation modeling. (SLD)
Descriptors: Bias, Estimation (Mathematics), Reliability, Selection
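As a rough sketch of the two coefficients contrasted above (the parameter values are arbitrary, not taken from the study), coefficient alpha is computed from the item covariance matrix, while a unit-weighted composite-reliability (omega-type) coefficient is computed from factor loadings and error variances.

```python
import numpy as np

# Hypothetical one-factor parameters for a 4-item scale (invented values).
loadings = np.array([0.8, 0.7, 0.6, 0.5])
uniques = np.array([0.36, 0.51, 0.64, 0.75])     # error (unique) variances

# Population item covariance matrix implied by the one-factor model.
sigma = np.outer(loadings, loadings) + np.diag(uniques)

k = len(loadings)
total_variance = sigma.sum()

# Coefficient alpha, computed from the item covariance matrix.
alpha = (k / (k - 1)) * (1 - np.trace(sigma) / total_variance)

# Unit-weighted composite reliability (omega), computed from the model parameters.
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + uniques.sum())

print(round(alpha, 3), round(omega, 3))
```

With uncorrelated errors, as assumed here, alpha is a lower bound on omega unless the loadings are equal (essential tau-equivalence), which is one reason the two coefficients can disagree.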
Peer reviewed
Ackerman, Terry A.; Evans, John A. – Applied Psychological Measurement, 1994
The effect of the conditioning score on the results of differential item functioning (DIF) analysis was examined with simulated data. The study demonstrates that DIF results that rely on a conditioning score can be quite different depending on the conditioning variable that is selected. (SLD)
Descriptors: Construct Validity, Identification, Item Bias, Selection
Peer reviewed
Engelhard, George, Jr. – Educational and Psychological Measurement, 1992
A historical perspective is provided of the concept of invariance in measurement theory, describing sample-invariant item calibration and item-invariant measurement of individuals. Invariance as a key measurement concept is illustrated through the measurement theories of E. L. Thorndike, L. L. Thurstone, and G. Rasch. (SLD)
Descriptors: Behavioral Sciences, Educational History, Measurement Techniques, Psychometrics
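As a purely illustrative sketch of the invariance idea surveyed above (not material from the article; parameters are invented), under the Rasch model the difference in log-odds between two persons is the same on every item, so person comparisons are item-free.

```python
import numpy as np

def rasch_logit(theta, b):
    """Log-odds of a correct response under the Rasch model: theta - b."""
    return theta - b

# Invented ability and difficulty parameters.
theta_v, theta_w = 1.2, -0.4                      # two persons
item_difficulties = np.array([-1.0, 0.0, 0.8, 2.0])

# The log-odds difference between the two persons is identical on every item.
diffs = rasch_logit(theta_v, item_difficulties) - rasch_logit(theta_w, item_difficulties)
print(diffs)   # every entry equals theta_v - theta_w = 1.6
```

An analogous argument with the roles of persons and items exchanged gives sample-invariant item calibration.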
Peer reviewed
Matthews, Margaret – Reading in a Foreign Language, 1990
Presents a critical analysis of the paper "Testing Reading Comprehension Skills, Part One," focusing on the inadequacy of taxonomies of skills for describing individual readers' processes and, hence, on their limited usefulness in test construction. (15 references) (GLR)
Descriptors: Classification, Evaluation, Reading Comprehension, Second Language Learning
Peer reviewed
Oshima, T. C.; Miller, M. David – Applied Psychological Measurement, 1992
How item bias indexes based on item response theory (IRT) identify bias that results from multidimensionality is demonstrated. Simulation results suggest that IRT-based bias indexes detect multidimensional items with bias but do not detect multidimensional items without bias. They also do not confound between-group differences on the primary test.…
Descriptors: Computer Simulation, Item Bias, Item Response Theory, Mathematical Models
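A minimal sketch of the kind of setup described above, assuming a compensatory two-dimensional 2PL item with invented parameters: when two groups are matched on the primary dimension but differ on a secondary dimension, an item that loads on the secondary dimension shows a between-group difference in expected performance, while a purely primary-dimension item does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta1, theta2, a1, a2, d):
    """Compensatory two-dimensional 2PL item response function."""
    return 1.0 / (1.0 + np.exp(-(a1 * theta1 + a2 * theta2 + d)))

n = 100_000
theta1 = rng.normal(0.0, 1.0, n)        # both groups matched on the primary dimension
theta2_ref = rng.normal(0.0, 1.0, n)    # reference group, secondary dimension
theta2_foc = rng.normal(-0.5, 1.0, n)   # focal group, lower mean on the secondary dimension

# Item measuring both dimensions (a2 > 0): the groups differ in expected performance.
print(p_correct(theta1, theta2_ref, 1.0, 0.8, 0.0).mean(),
      p_correct(theta1, theta2_foc, 1.0, 0.8, 0.0).mean())

# Item measuring only the primary dimension (a2 = 0): no group difference.
print(p_correct(theta1, theta2_ref, 1.0, 0.0, 0.0).mean(),
      p_correct(theta1, theta2_foc, 1.0, 0.0, 0.0).mean())
```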
Peer reviewed
Muraki, Eiji – Applied Psychological Measurement, 1993
The concept of information functions developed for dichotomous item response models is adapted for the partial credit model, and the information function is used to investigate collapsing and recoding categories of polytomously scored items from the National Assessment of Educational Progress. (SLD)
Descriptors: Equations (Mathematics), Item Response Theory, National Surveys, Psychometrics
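For reference, the information function adapted above for the partial credit model is often written as the conditional variance of the item score given theta. The sketch below uses invented step parameters and is not Muraki's code.

```python
import numpy as np

def pcm_probs(theta, steps):
    """Category probabilities for the partial credit model.

    steps: step parameters b_1..b_m; the item is scored 0..m.
    """
    # Cumulative sums of (theta - b_v), with 0 for category 0.
    cum = np.concatenate(([0.0], np.cumsum(theta - np.asarray(steps))))
    expo = np.exp(cum - cum.max())        # subtract the max for numerical stability
    return expo / expo.sum()

def pcm_information(theta, steps):
    """Item information = Var(item score | theta) under the partial credit model."""
    probs = pcm_probs(theta, steps)
    scores = np.arange(len(probs))
    mean = np.sum(scores * probs)
    return np.sum(scores**2 * probs) - mean**2

steps = [-1.0, 0.2, 1.1]                  # invented step parameters for a 4-category item
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(pcm_information(theta, steps), 3))
```

Collapsing or recoding categories, as investigated in the article, amounts to merging adjacent score levels and recomputing this function.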
Peer reviewed
Kuder, Frederic; Diamond, Esther E.; Zytowski, Donald G. – Educational and Psychological Measurement, 1998
Predictive validity, generally taken to be the prime validity that occupationally normed interest inventories should demonstrate, is dependent on the capacity of an instrument to differentiate between occupations. A comparison of two methods of differentiation shows that a method using proportions of each occupational group to assign item-scoring…
Descriptors: Interest Inventories, Occupational Tests, Predictive Measurement, Predictive Validity
Peer reviewed
Bradlow, Eric T.; Thomas, Neal – Journal of Educational and Behavioral Statistics, 1998
A set of conditions is presented for the validity of inference for Item Response Theory (IRT) models applied to data collected from examinations that allow students to choose a subset of items. Common low-dimensional IRT models estimated by standard methods do not resolve the difficult problems posed by choice-based data. (SLD)
Descriptors: Inferences, Item Response Theory, Models, Selection
Peer reviewed
Katz, Irvin R.; Martinez, Michael E.; Sheehan, Kathleen M.; Tatsuoka, Kikumi K. – Journal of Educational and Behavioral Statistics, 1998
A technique is presented for applying the Rule Space methodology of cognitive diagnosis to assessment in a semantically rich domain. The approach bases diagnosis on item characteristics that are more abstract than individual problem-solving steps. The method is illustrated through a test of architectural knowledge completed by 122 architects. (SLD)
Descriptors: Architects, Architecture, Cognitive Tests, Diagnostic Tests
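Rule Space diagnosis as described above works from item attributes rather than individual solution steps. As a loose, simplified sketch only (the full methodology also maps examinees into a theta-zeta classification space, which is not shown), an ideal response pattern for a hypothetical knowledge state can be derived from a Q-matrix under a conjunctive rule; the matrix below is invented.

```python
import numpy as np

# Invented Q-matrix: rows = items, columns = attributes the item requires.
Q = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
])

# Hypothetical knowledge state: which attributes the examinee has mastered.
mastered = np.array([1, 1, 0])

# Conjunctive rule: an item is answered correctly only if every required
# attribute is mastered.
ideal_pattern = np.all((Q == 0) | (mastered == 1), axis=1).astype(int)
print(ideal_pattern)   # -> [1 1 0 0]
```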
Peer reviewed
Powers, Donald E.; Bennett, Randy Elliot – Applied Measurement in Education, 1999
Explored how allowing examinees to select test questions affected examinee performance and test characteristics for a measure of ability to generate hypotheses about a situation. Results with 2,429 examinees who elected the choice condition on the Graduate Record Examination suggest that items are differentially attractive to examinees. (SLD)
Descriptors: Ability, College Students, Higher Education, Responses
Peer reviewed
Higgins, N. C.; Zumbo, Bruno D.; Hay, Jana L. – Educational and Psychological Measurement, 1999
Confirmatory factor analysis of data from 1,346 respondents to the Attributional Style Questionnaire (ASQ) (C. Peterson and others, 1982) reveals that adequate fit is provided by a three-factor attributional style model that includes context-dependent item sets. Results suggest that there is no such thing as a nonsituational attributional style.…
Descriptors: Adults, Attribution Theory, Construct Validity, Context Effect
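As a generic illustration of the confirmatory factor analysis structure behind a model like the one above (the loadings and factor correlations below are invented, not the ASQ estimates), a three-factor model implies the covariance structure Sigma = Lambda Phi Lambda' + Theta, and fit assessment compares this implied matrix with the observed one.

```python
import numpy as np

# Invented loadings for 6 items on 3 oblique factors (2 items per factor).
Lambda = np.array([
    [0.7, 0.0, 0.0],
    [0.6, 0.0, 0.0],
    [0.0, 0.8, 0.0],
    [0.0, 0.7, 0.0],
    [0.0, 0.0, 0.6],
    [0.0, 0.0, 0.5],
])
Phi = np.array([            # factor correlation matrix (oblique factors)
    [1.0, 0.4, 0.3],
    [0.4, 1.0, 0.5],
    [0.3, 0.5, 1.0],
])

common = Lambda @ Phi @ Lambda.T          # common-factor part of the covariances
Theta = np.diag(1.0 - np.diag(common))    # unique variances, fixing item variances at 1
Sigma_implied = common + Theta            # Sigma = Lambda Phi Lambda' + Theta
print(np.round(Sigma_implied, 2))
```

In practice the loadings and factor correlations are estimated (for example, by maximum likelihood) and fit indexes compare the implied matrix with the sample covariance matrix; that estimation step is omitted here.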
Peer reviewed
Perlow, Richard; Moore, D. De Wayne; Kyle, Rebecca; Killen, Thomas – Educational and Psychological Measurement, 1999
Examined a set of working memory scales containing two versions of test items, one reading based and one mathematics based. Data from 201 undergraduates support the hypothesis that an oblique two-factor model in which the factors are based on item content would fit the data well. (SLD)
Descriptors: Factor Structure, Higher Education, Mathematics, Models
Peer reviewed
Nandakumar, Ratna; Yu, Feng; Li, Hsin-Hung; Stout, William – Applied Psychological Measurement, 1998
Investigated, through Monte Carlo simulation, the performance of the Poly-DIMTEST (PD) procedure (and associated computer program) in assessing the unidimensionality of test data produced by polytomous items. Results show that PD can confirm unidimensionality for unidimensional simulated data and can detect lack of unidimensionality. (SLD)
Descriptors: Evaluation Methods, Item Response Theory, Monte Carlo Methods, Simulation
Peer reviewed
Raykov, Tenko – Applied Psychological Measurement, 1998
Examines the relationship between Cronbach's coefficient alpha and the reliability of a composite of a prespecified set of interrelated nonhomogeneous components through simulation. Shows that alpha can over- or underestimate scale reliability at the population level. Illustrates the bias in terms of structural parameters. (SLD)
Descriptors: Reliability, Simulation, Statistical Bias, Structural Equation Models
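A small numeric sketch of the point above, with made-up structural parameters rather than Raykov's examples: for a congeneric scale with uncorrelated errors alpha falls below the composite reliability, while positively correlated errors can push it above.

```python
import numpy as np

def coefficient_alpha(sigma):
    """Coefficient alpha computed from an item covariance matrix."""
    k = sigma.shape[0]
    return (k / (k - 1)) * (1 - np.trace(sigma) / sigma.sum())

def composite_reliability(sigma, loadings):
    """Reliability of the unit-weighted composite: true variance / total variance."""
    return loadings.sum() ** 2 / sigma.sum()

loadings = np.array([0.9, 0.7, 0.5, 0.3])      # unequal (congeneric) loadings, invented
uniques = np.diag([0.4, 0.5, 0.6, 0.7])        # uncorrelated error variances

sigma_uncorr = np.outer(loadings, loadings) + uniques
print(coefficient_alpha(sigma_uncorr), composite_reliability(sigma_uncorr, loadings))
# alpha is lower than the composite reliability (underestimation)

sigma_corr = sigma_uncorr.copy()
sigma_corr[0, 1] += 0.3                        # add a positive error covariance
sigma_corr[1, 0] += 0.3                        # between items 1 and 2
print(coefficient_alpha(sigma_corr), composite_reliability(sigma_corr, loadings))
# alpha now exceeds the composite reliability (overestimation)
```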