Showing 1,321 to 1,335 of 3,711 results
Peer reviewed
Chernin, Jeffrey; Holden, Janice Miner; Chandler, Cynthia – Measurement and Evaluation in Counseling and Development, 1997
Explores heterosexist bias in seven widely used assessment instruments. Focuses on bias that is observable in the instruments themselves and in the ancillary materials. Describes three types of bias and how these biases manifest in various instruments, and makes recommendations for mental health practitioners and for professionals who develop…
Descriptors: Evaluation Problems, Homophobia, Homosexuality, Lesbianism
Peer reviewed
Wohlgemuth, Elaine A. – Measurement and Evaluation in Counseling and Development, 1997
Comments on an article that explored heterosexist bias in seven widely used assessment instruments. Raises concerns about how the article defined homophobia and heterosexism, and advocates a cautious approach to decreasing heterosexist and homophobic bias in assessment, which includes gathering assessment data. (RJM)
Descriptors: Evaluation Problems, Homophobia, Homosexuality, Lesbianism
Young, Jeffrey R. – Chronicle of Higher Education, 2003
Discusses a researcher's argument that designers of the college entrance examination, in trying to create a reliable test, may be guaranteeing bias. (EV)
Descriptors: College Entrance Examinations, Higher Education, Racial Bias, Test Bias
Peer reviewed
Gordon, Robert A.; And Others – Journal of Vocational Behavior, 1988
Asserts that fragmentation of academic disciplines handicaps efforts to deal rationally with problems arising from group differences in general intelligence. Contends that open discussions like those appearing in this special journal issue are necessary, and illustrates arguments through comments on moral, scientific, and legal concerns addressed…
Descriptors: Employment Practices, Intelligence, Occupational Tests, Racial Differences
Peer reviewed
Stuart-Hamilton, Ian – Educational Gerontology, 1999
Attitudes were assessed after 89 undergraduates were asked either five neutral questions, five questions on the economic welfare of older people, or five on elders' physical frailty. Economic questions resulted in significantly more negative views of the mental aspects of aging, suggesting that questionnaires may contain tacit sources of bias. (SK)
Descriptors: Age Discrimination, Aging (Individuals), Attitudes, Test Bias
Peer reviewed
Sturmer, Paul J.; Gerstein, Lawrence H. – Journal of Mental Health Counseling, 1997
Reviews literature on Minnesota Multiphasic Personality Inventory (MMPI) racial bias since 1977. Research indicates cases in which racial bias is present and cases where it is not. Anticipates the onslaught of research on racial bias with the MMPI-2 and provides recommendations to researchers and mental-health counselors. (RJM)
Descriptors: Blacks, Racial Bias, Research Needs, Test Bias
Peer reviewed
Kwate, Naa Oyo A. – Journal of Black Psychology, 2001
Examines the Eurocentric basis of the Wechsler Intelligence Scale for Children--Third Edition (WISC-III) and reveals its antagonistic and incompatible relationship to an Africentric conception of intellectual and mental health. Suggests that the WISC-III provides a measure of misorientation quotient rather than intelligence quotient, and notes…
Descriptors: Afrocentrism, Black Youth, Intelligence Tests, Racial Bias
Peer reviewed
Hong, Sehee; Roznowski, Mary – Applied Measurement in Education, 2001
Studied the relationship between internal test bias and regression slope differences. Monte Carlo simulation results indicate a strong relationship between internal test bias and slope differences, but this relationship does not imply that an absence of internal test bias leads to slope invariance or that slope differences imply internal test…
Descriptors: Item Bias, Monte Carlo Methods, Regression (Statistics), Test Bias
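A minimal sketch of what the slope differences above refer to, using fabricated data rather than the authors' Monte Carlo design: fit the criterion-on-test regression separately in two groups and compare the fitted slopes. Every function name, variable name, and parameter value below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def simulate_group(ability_mean, criterion_slope, noise_sd):
    """Simulate observed test scores and a criterion measure for one group."""
    theta = rng.normal(ability_mean, 1.0, n)           # latent ability
    test = theta + rng.normal(0.0, 0.5, n)             # observed test score
    criterion = criterion_slope * theta + rng.normal(0.0, noise_sd, n)
    return test, criterion

def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

test_a, crit_a = simulate_group(0.0, 1.0, 1.0)
test_b, crit_b = simulate_group(-0.5, 1.0, 1.0)

print("group A slope:", round(ols_slope(test_a, crit_a), 3))
print("group B slope:", round(ols_slope(test_b, crit_b), 3))
```

Equal fitted slopes across groups would indicate slope invariance; the abstract's point is that observing or not observing internal (item-level) bias does not by itself settle which outcome occurs.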
Peer reviewed
Direct link
Winman, Anders; Hansson, Patrik; Juslin, Peter – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2004
Format dependence implies that assessment of the same subjective probability distribution produces different conclusions about over- or underconfidence depending on the assessment format. In 2 experiments, the authors demonstrate that the overconfidence bias that occurs when participants produce intervals for an uncertain quantity is almost…
Descriptors: Probability, Intervals, Sampling, Psychological Studies
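To make the interval-production format concrete, here is a small sketch with fabricated data (not the authors' materials) that scores produced 80% intervals by their hit rate; a hit rate well below the nominal level is the usual overconfidence signature for this format.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

true_values = rng.normal(100.0, 15.0, n)
# Simulated participant interval centers miss the truth by a fair margin...
centers = true_values + rng.normal(0.0, 10.0, n)
# ...but the stated 80% intervals are narrower than that error warrants.
half_widths = np.full(n, 8.0)

lower, upper = centers - half_widths, centers + half_widths
hit_rate = np.mean((true_values >= lower) & (true_values <= upper))

print(f"nominal confidence: 0.80, observed hit rate: {hit_rate:.2f}")
```

With these assumed settings the observed hit rate typically lands well short of the stated 0.80, which is the overconfidence pattern the abstract describes.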
Peer reviewed
Direct link
Hogarty, Kristine Y.; Kromrey, Jeffrey D.; Ferron, John M.; Hines, Constance V. – Psychometrika, 2004
The purpose of this study was to investigate and compare the performance of a stepwise variable selection algorithm with that of traditional exploratory factor analysis. The Monte Carlo study included six factors in the design: the number of common factors; the number of variables explained by the common factors; the magnitude of factor loadings; the number…
Descriptors: Factor Analysis, Comparative Analysis, Test Bias, Monte Carlo Methods
Peer reviewed
PDF on ERIC Download full text
Vaughn, Brandon K. – Journal on School Educational Technology, 2008
This study considers the importance of contextual effects on the quality of item bias and differential item functioning (DIF) assessments in measurement. Often, in educational studies, students are clustered within teachers or schools, and these clusters can affect psychometric properties yet are largely ignored by traditional item analyses. A…
Descriptors: Test Bias, Educational Assessment, Educational Quality, Context Effect
Peer reviewed
PDF on ERIC Download full text
Thelk, Amy – Research & Practice in Assessment, 2008
Differential Item Functioning (DIF) occurs when there is a greater probability of solving an item based on group membership after controlling for ability. Following administration of a 50-item scientific and quantitative reasoning exam to 286 two-year and 1,174 four-year students, items were evaluated for DIF. Two-year students performed…
Descriptors: Test Bias, Probability, Test Items, Student Evaluation
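As a concrete illustration of the DIF definition above, the sketch below computes the Mantel-Haenszel common odds ratio for a single simulated item. This is one standard DIF check, not necessarily the procedure used in the study, and every variable name and parameter value here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1500

group = rng.integers(0, 2, n)        # 0 = reference group, 1 = focal group
ability = rng.normal(0.0, 1.0, n)

# One item with uniform DIF: harder for the focal group at equal ability.
logit = 1.2 * ability - 0.6 * group
correct = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Stratify examinees on ability (quartiles stand in for total-score strata).
strata = np.digitize(ability, np.quantile(ability, [0.25, 0.5, 0.75]))

num = den = 0.0
for k in np.unique(strata):
    in_k = strata == k
    a = np.sum(correct & in_k & (group == 0))    # reference, correct
    b = np.sum(~correct & in_k & (group == 0))   # reference, incorrect
    c = np.sum(correct & in_k & (group == 1))    # focal, correct
    d = np.sum(~correct & in_k & (group == 1))   # focal, incorrect
    t = a + b + c + d
    num += a * d / t
    den += b * c / t

print("Mantel-Haenszel common odds ratio:", round(num / den, 2))
```

In practice the strata would be formed from observed total test scores rather than a latent ability variable; an odds ratio far from 1 flags the item for DIF review.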
Peer reviewed
Direct link
Coe, Robert – Oxford Review of Education, 2008
The comparability of examinations in different subjects has been a controversial topic for many years and a number of criticisms have been made of statistical approaches to estimating the "difficulties" of achieving particular grades in different subjects. This paper argues that if comparability is understood in terms of a linking…
Descriptors: Test Items, Grades (Scholastic), Foreign Countries, Test Bias
Peer reviewed
Direct link
Ferrara, Steve – Educational Assessment, 2008
The No Child Left Behind Act of 2001 requires all states to assess the English proficiency of English language learners each school year. Under Title I and Title III of No Child Left Behind, states are required to measure the annual growth of students' English language development in reading, listening, writing, and speaking and in comprehension…
Descriptors: Speech Communication, Federal Legislation, Second Language Learning, Psychometrics
Peer reviewed
Direct link
Sireci, Stephen G.; Han, Kyung T.; Wells, Craig S. – Educational Assessment, 2008
In the United States, when English language learners (ELLs) are tested, they are usually tested in English and their limited English proficiency is a potential cause of construct-irrelevant variance. When such irrelevancies affect test scores, inaccurate interpretations of ELLs' knowledge, skills, and abilities may occur. In this article, we…
Descriptors: Test Use, Educational Assessment, Psychological Testing, Validity