Showing 5,401 to 5,415 of 9,547 results
Peer reviewed
Chang, Lei – Educational and Psychological Measurement, 1995
Items previously described as "negatively worded" are redefined as "connotatively inconsistent" because this term has a broader base for generalization. Using generalizability theory with a sample of 102 graduate students, the study showed that connotatively consistent and reversed connotatively inconsistent items were not…
Descriptors: Generalizability Theory, Graduate Students, Graduate Study, Likert Scales
Peer reviewed
Chang, Lei – Applied Measurement in Education, 1995
A test item is defined as connotatively consistent (CC) or connotatively inconsistent (CI) when its connotation agrees with or contradicts that of the majority of items on a test. CC and CI items were examined in the Life Orientation Test and were shown to measure correlated but distinct traits. (SLD)
Descriptors: Attitude Measures, College Students, Higher Education, Personality Measures
Peer reviewed
Hsu, Louis M. – Multivariate Behavioral Research, 1992
D.V. Budescu and J.L. Rogers (1981) proposed a method of adjusting correlations of scales to eliminate spurious components resulting from the overlapping of scales. Three reliability correction formulas are derived in this article that are based on more tenable assumptions. (SLD)
Descriptors: Correlation, Equations (Mathematics), Mathematical Models, Personality Measures
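The spurious-overlap problem this abstract refers to can be illustrated with the classical part-whole correction; this is the standard textbook formula for correlating a subscale with the rest of a composite that contains it, not the specific reliability-based formulas Hsu derives, and the numbers are invented:

```python
import math

def part_whole_corrected_r(r_xy, s_x, s_y):
    """Correlation of subscale X with the remainder of composite Y (Y - X),
    removing the spurious component that arises because X is part of Y.
    r_xy: observed correlation of X with Y; s_x, s_y: standard deviations."""
    num = r_xy * s_y - s_x
    den = math.sqrt(s_x ** 2 + s_y ** 2 - 2.0 * r_xy * s_x * s_y)
    return num / den

# Invented numbers: a subscale correlating .70 with the full composite
# drops to about .39 once its own contribution is removed.
r_corrected = part_whole_corrected_r(0.70, 2.0, 5.0)
```

The correction matters whenever a scale is scored into a total that is then correlated back with the scale itself.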
Peer reviewed
Lam, Tony C. M.; Stevens, Joseph J. – Applied Measurement in Education, 1994
Effects of the following three variables on rating scale response were studied: (1) polarization of opinion regarding scale content; (2) intensity of item wording; and (3) psychological width of the scale. Results with 167 college students suggest best ways to balance polarization and item wording regardless of scale width. (SLD)
Descriptors: College Students, Content Analysis, Higher Education, Rating Scales
Peer reviewed
Klieme, Eckhard; Stumpf, Heinrich – Educational and Psychological Measurement, 1991
A FORTRAN 77 computer program is presented to perform analyses of differential item performance in psychometric tests. The program performs the Mantel-Haenszel procedure and computes additional classical indices of differential item functioning (DIF) and associated effect size measures. (Author/SLD)
Descriptors: Chi Square, Computer Software, Effect Size, Estimation (Mathematics)
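For readers unfamiliar with the procedure this abstract names, the Mantel-Haenszel DIF statistics can be sketched in a few lines; the stratified 2x2 counts below are invented for illustration, and the -2.35·ln(α) rescaling is the conventional ETS delta-metric effect size:

```python
import math

# Hypothetical 2x2 tables, one per ability stratum: (A, B, C, D) =
# (reference right, reference wrong, focal right, focal wrong).
strata = [
    (40, 10, 30, 20),
    (35, 15, 25, 25),
    (20, 30, 10, 40),
]

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds-ratio estimate across strata."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def mh_chi_square(strata):
    """Mantel-Haenszel chi-square with continuity correction."""
    obs = exp = var = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        obs += a
        exp += (a + b) * (a + c) / n
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n * n * (n - 1))
    return (abs(obs - exp) - 0.5) ** 2 / var

alpha = mh_odds_ratio(strata)    # > 1: item favors the reference group
delta = -2.35 * math.log(alpha)  # ETS delta-scale DIF effect size
chi2 = mh_chi_square(strata)
```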
Peer reviewed
Albanese, Mark A. – Educational Measurement: Issues and Practice, 1993
A comprehensive review is given of evidence bearing on the recommendation to avoid complex multiple-choice (CMC) items. Avoiding Type K items (four primary responses and five secondary choices) seems warranted, but the evidence against CMC items in general is less clear. (SLD)
Descriptors: Cues, Difficulty Level, Multiple Choice Tests, Responses
Peer reviewed
Melnick, Steven A.; Gable, Robert K. – Educational Research Quarterly, 1990
By administering an attitude survey to 3,328 parents of elementary school students, the use of positive and negative Likert item stems was analyzed. Respondents who answered consistently across positive/negative item pairs parallel in meaning were compared with those who answered inconsistently. Implications for the construction of affective measures…
Descriptors: Affective Measures, Comparative Testing, Elementary Education, Likert Scales
Peer reviewed
Jolly, Brian; And Others – Teaching and Learning in Medicine, 1993
A University of Adelaide (Australia) study investigated the effect of administering identical stations to different classes over a 12-year period within the objective structured clinical examination component of a final-year medical school examination. Repeat administrations correlated with improved student performance over time. (Author/MSE)
Descriptors: Clinical Experience, Higher Education, Medical Education, Professional Education
Peer reviewed
Fan, Xitao – Educational and Psychological Measurement, 1998
This study empirically examined the behavior of item and person statistics derived from item response theory and classical test theory, using data from a large-scale statewide assessment. Findings show that the person and item statistics from the two measurement frameworks are quite comparable. (SLD)
Descriptors: Item Response Theory, State Programs, Statistical Analysis, Test Items
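The kind of IRT-versus-CTT comparison this abstract reports can be mimicked in a small simulation; the data are synthetic, and the logit of the marginal proportion correct is used only as a crude stand-in for a real Rasch difficulty estimate:

```python
import math
import random

random.seed(1)
n_persons, n_items = 2000, 20

# Simulate responses under a Rasch model: P(correct) = sigmoid(theta - b)
thetas = [random.gauss(0.0, 1.0) for _ in range(n_persons)]
bs = [random.uniform(-2.0, 2.0) for _ in range(n_items)]
responses = [
    [1 if random.random() < 1.0 / (1.0 + math.exp(-(t - b))) else 0 for b in bs]
    for t in thetas
]

# CTT item difficulty: proportion answering correctly
p = [sum(row[j] for row in responses) / n_persons for j in range(n_items)]

# Crude IRT-style difficulty: logit of the marginal proportion correct
# (a rough proxy for the Rasch b parameter, not a maximum-likelihood fit)
b_hat = [-math.log(pj / (1.0 - pj)) for pj in p]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((u - mx) * (v - my) for u, v in zip(x, y))
    dx = math.sqrt(sum((u - mx) ** 2 for u in x))
    dy = math.sqrt(sum((v - my) ** 2 for v in y))
    return num / (dx * dy)

r = pearson(b_hat, bs)  # agreement between the two difficulty orderings
```

With a reasonably large sample, the CTT-based and model-based difficulty orderings line up closely, which is the pattern of comparability the study describes.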
Peer reviewed
van der Linden, Wim J. – Psychometrika, 1998
This paper suggests several item selection criteria for adaptive testing, all based on use of the true posterior. Some of the ability estimators produced by these criteria are discussed and evaluated empirically. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Peer reviewed
Livne, Nava L.; Livne, Oren E.; Milgram, Roberta M. – International Journal of Mathematical Education in Science and Technology, 1999
Develops a mapping sentence to construct test items measuring academic and creative abilities in mathematics at four levels. Describes the three stages of the process of developing the mapping sentence and presents examples of test items representing each ability/level combination. Contains 63 references. (Author/ASK)
Descriptors: Ability Identification, Academic Ability, Creativity, Mathematics Education
Peer reviewed
Sireci, Stephen G. – Educational Assessment, 1998
Describes content-validity theory and illustrates new and traditional approaches for conducting content-validity studies. Newer approaches are based on multidimensional scaling analysis of item-similarity ratings, while traditional approaches are based on ratings of item-objective congruence and relevance. (Author/SLD)
Descriptors: Content Validity, Data Analysis, Evaluation Methods, Multidimensional Scaling
Peer reviewed
Vispoel, Walter P. – Journal of Educational Measurement, 1998
Compared results from computer-adaptive and self-adaptive tests under conditions in which item review was and was not permitted for 379 college students. Results suggest that, when given the opportunity, most examinees will change answers, but usually only to a small portion of items, resulting in some benefit to the test taker. (SLD)
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Higher Education
Peer reviewed
Patz, Richard J.; Junker, Brian W. – Journal of Educational and Behavioral Statistics, 1999
Extends the basic Markov chain Monte Carlo (MCMC) strategy of R. Patz and B. Junker (1999) for Bayesian inference in complex Item Response Theory settings to address issues such as nonresponse, designed missingness, multiple raters, guessing behaviors, and partial credit (polytomous) test items. Applies the MCMC method to data from the National…
Descriptors: Bayesian Statistics, Item Response Theory, Markov Processes, Monte Carlo Methods
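A much-simplified sketch of MCMC estimation for an IRT model follows: a random-walk Metropolis sampler for a toy Rasch data set, with invented responses and tuning values, and without the nonresponse, rater, or polytomous extensions the article develops; it is not the Patz-Junker implementation:

```python
import math
import random

random.seed(0)

# Invented toy response matrix: responses[i][j] = 1 if person i
# answered item j correctly.
responses = [
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
]
n_persons, n_items = len(responses), len(responses[0])

def log_posterior(thetas, bs):
    """Rasch log-likelihood plus standard-normal priors on all parameters."""
    lp = sum(-t * t / 2.0 for t in thetas) + sum(-b * b / 2.0 for b in bs)
    for i in range(n_persons):
        for j in range(n_items):
            p = 1.0 / (1.0 + math.exp(-(thetas[i] - bs[j])))
            lp += math.log(p) if responses[i][j] else math.log(1.0 - p)
    return lp

def metropolis(n_iter=2000, step=0.3):
    """Random-walk Metropolis over all abilities and difficulties jointly."""
    thetas, bs = [0.0] * n_persons, [0.0] * n_items
    lp = log_posterior(thetas, bs)
    draws = []
    for _ in range(n_iter):
        prop_t = [t + random.gauss(0.0, step) for t in thetas]
        prop_b = [b + random.gauss(0.0, step) for b in bs]
        lp_new = log_posterior(prop_t, prop_b)
        if math.log(random.random()) < lp_new - lp:  # accept/reject
            thetas, bs, lp = prop_t, prop_b, lp_new
        draws.append(list(bs))
    return draws

draws = metropolis()  # posterior draws of the item difficulties
```

In practice one would update parameters in blocks (persons, then items) and discard an initial burn-in before summarizing the draws.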
Peer reviewed
Hirschfeld, Robert R. – Educational and Psychological Measurement, 2000
Compared the original intrinsic and extrinsic subscales of the Minnesota Satisfaction Questionnaire short form to revised subscales, using data from samples of 99 employees and 250 graduate and undergraduate students. Analyses from both samples indicate that revising the intrinsic and extrinsic subscales made little difference to the results obtained…
Descriptors: College Students, Employees, Higher Education, Measurement Techniques