Showing 1,291 to 1,305 of 3,711 results
Peer reviewed
Frame, Roger E.; And Others – Journal of Learning Disabilities, 1984
In a simulated learning disabilities case presented to 24 experienced school psychologists, only 17 of the 744 possible diagnostic main effects and two-way interactions which might indicate bias were found to be significant, with seven differences being expected by chance. No statements about intelligence, classroom behavior, or social…
Descriptors: Clinical Diagnosis, Disabilities, Learning, School Psychologists
Peer reviewed
Cicchetti, Domenic V. – Educational and Psychological Measurement, 1976
A computer program which computes both the interjudge reliability of individual measurements and the extent to which the judges are biased in their ratings vis-a-vis each other is presented. The methods proposed are recommended on the basis of recent developments in statistical research. (Author/JKS)
Descriptors: Computer Programs, Individual Testing, Test Bias, Test Reliability
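As an aside for readers who want the flavor of the two quantities this abstract names, here is a minimal Python sketch: Cohen's kappa as one common interjudge agreement index and a signed mean difference as a crude bias check. It illustrates the general idea only; it is not Cicchetti's program, and the function names and sample ratings are invented for the example.

    import numpy as np

    def cohens_kappa(r1, r2):
        # Unweighted Cohen's kappa for two raters' categorical ratings.
        r1, r2 = np.asarray(r1), np.asarray(r2)
        categories = np.union1d(r1, r2)
        p_obs = np.mean(r1 == r2)  # observed proportion of exact agreement
        # Chance agreement from the two raters' marginal category proportions.
        p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
        return (p_obs - p_chance) / (1 - p_chance)

    def rater_bias(r1, r2):
        # Mean signed difference; values far from zero suggest one rater scores higher.
        return float(np.mean(np.asarray(r1, float) - np.asarray(r2, float)))

    ratings_a = [1, 2, 2, 3, 1, 2, 3, 3]  # invented ratings from judge A
    ratings_b = [1, 2, 3, 3, 1, 1, 3, 2]  # invented ratings from judge B
    print(cohens_kappa(ratings_a, ratings_b), rater_bias(ratings_a, ratings_b))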
Bach, Zellig – NJEA Review, 1971
Descriptors: Cultural Differences, Intelligence Tests, Test Bias, Test Reliability
LaValle, Kenneth P. – Today's Education, 1980
The problems of standardized testing are considered, and the efforts of New York State legislators to change current practices in test usage are recounted. (LH)
Descriptors: Equated Scores, Student Rights, Test Bias, Testing Problems
Peer reviewed
Cicchetti, Domenic V.; And Others – Educational and Psychological Measurement, 1977
Computer programs are described which compute rater agreement and rater bias statistics with qualitative data. They also utilize techniques for selecting the most reliable rater from a set of raters and identifying those cases which are most difficult for raters to classify. (Author/JKS)
Descriptors: Computer Programs, Measurement Techniques, Rating Scales, Test Bias
Peer reviewed
Hills, John R. – Educational Measurement: Issues and Practice, 1989
Test bias detection methods based on item response theory (IRT) are reviewed. Five such methods are commonly used: (1) equality of item parameters; (2) area between item characteristic curves; (3) sums of squares; (4) pseudo-IRT; and (5) one-parameter-IRT. A table compares these and six newer or less tested methods. (SLD)
Descriptors: Item Analysis, Test Bias, Test Items, Testing Programs
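To make the second method in that list concrete, the sketch below numerically integrates the absolute difference between two 2PL item characteristic curves, one estimated for a reference group and one for a focal group. The item parameters are made-up values, and the code is a generic illustration rather than any of the reviewed procedures.

    import numpy as np

    def icc_2pl(theta, a, b):
        # Two-parameter logistic item characteristic curve: P(correct | theta).
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def unsigned_area(a_ref, b_ref, a_foc, b_foc, lo=-4.0, hi=4.0, n=2001):
        # Trapezoid-rule integral of |P_ref(theta) - P_foc(theta)| over theta.
        theta = np.linspace(lo, hi, n)
        diff = np.abs(icc_2pl(theta, a_ref, b_ref) - icc_2pl(theta, a_foc, b_foc))
        step = (hi - lo) / (n - 1)
        return float(np.sum((diff[:-1] + diff[1:]) / 2) * step)

    # Example: equal discriminations, item 0.5 logits harder for the focal group.
    print(unsigned_area(a_ref=1.2, b_ref=0.0, a_foc=1.2, b_foc=0.5))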
Peer reviewed
Gandy, Gerald L. – Psychology: A Journal of Human Behavior, 1988
Presents a recent history of the public controversy concerning academic aptitude/intelligence tests. Offers suggestions about test technology and research that may influence public interest groups to develop a better perspective. Urges better cooperation between professional and public interest groups. (Author/ABL)
Descriptors: Aptitude Tests, Intelligence Tests, Test Bias, Test Validity
Peer reviewed
Cole, Jack; And Others – Adult Learning, 1993
Highlights test biases that can hamper the performance of adult learners. The biases can originate from the content of the questions or from special situations related to motivation, competition, membership in specific groups, or instructional methods. (JOW)
Descriptors: Adult Education, Literacy Education, Test Bias, Test Items
Peer reviewed
Flowers, Claudia P.; Oshima, T. C.; Raju, Nambury S. – Applied Psychological Measurement, 1999
Examined the polytomous differential functioning of items and tests (DFIT) framework proposed by N. Raju and others through simulation. Findings show that the DFIT framework is effective in identifying differential item functioning and differential test functioning. (SLD)
Descriptors: Identification, Item Bias, Models, Test Bias
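For a rough sense of what the DFIT framework quantifies, one of its item-level indices (noncompensatory DIF, NCDIF) can be sketched for a dichotomous 2PL item as the squared gap between focal- and reference-group item characteristic curves, averaged over focal-group abilities. The code below is a simplified illustration with invented parameters, not the authors' polytomous implementation.

    import numpy as np

    def icc_2pl(theta, a, b):
        # Two-parameter logistic item characteristic curve.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def ncdif(theta_focal, a_ref, b_ref, a_foc, b_foc):
        # Squared difference between the two groups' curves,
        # averaged over the focal group's ability distribution.
        gap = icc_2pl(theta_focal, a_foc, b_foc) - icc_2pl(theta_focal, a_ref, b_ref)
        return float(np.mean(gap ** 2))

    theta = np.random.default_rng(1).normal(0.0, 1.0, 5000)  # simulated focal abilities
    print(ncdif(theta, a_ref=1.0, b_ref=0.0, a_foc=1.0, b_foc=0.4))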
Peer reviewed
Berge, Jos M. F. Ten; Socan, Gregor – Psychometrika, 2004
To assess the reliability of congeneric tests, specifically designed reliability measures have been proposed. This paper emphasizes that such measures rely on a unidimensionality hypothesis, which can neither be confirmed nor rejected when there are only three test parts, and will invariably be rejected when there are more than three test parts.…
Descriptors: Test Reliability, Sampling, Psychometrics, Test Bias
Peer reviewed
Penfield, Randall D.; Algina, James – Journal of Educational Measurement, 2006
One approach to measuring unsigned differential test functioning is to estimate the variance of the differential item functioning (DIF) effect across the items of the test. This article proposes two estimators of the DIF effect variance for tests containing dichotomous and polytomous items. The proposed estimators are direct extensions of the…
Descriptors: Test Bias, Test Format, Test Items, Simulation
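The general idea behind a DIF effect variance estimator can be shown with a simple method-of-moments calculation: take the variance of the per-item DIF estimates and subtract the average sampling variance, truncating at zero. The sketch below is in that spirit but is not necessarily either of the estimators proposed in the article; the inputs are illustrative numbers.

    import numpy as np

    def dif_effect_variance(dif_estimates, sampling_variances):
        # Observed spread of DIF estimates minus the part due to estimation error.
        d = np.asarray(dif_estimates, float)
        s2 = np.asarray(sampling_variances, float)
        tau_sq = np.var(d, ddof=1) - np.mean(s2)
        return max(float(tau_sq), 0.0)  # a variance cannot be negative

    dif = [0.10, -0.05, 0.30, 0.02, -0.15, 0.20]  # e.g., per-item log-odds DIF estimates
    se2 = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02]    # their squared standard errors
    print(dif_effect_variance(dif, se2))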
Rajagopal, Kadhir – ASCD, 2011
Inspired by his ability to teach algebra to low-income and mostly African American and Latino urban students--and have them outscore the state averages for high-income and Caucasian students on standardized tests--Kadhir "Raja" Rajagopal, the 2011 California Teacher of the Year, provides you with a model for teaching that unleashes the…
Descriptors: Assignments, Urban Schools, Standardized Tests, Cooperative Learning
Peer reviewed
Puhan, Gautam; Moses, Tim P.; Yu, Lei; Dorans, Neil J. – ETS Research Report Series, 2007
The purpose of the current study was to examine whether log-linear smoothing of observed score distributions in small samples results in more accurate differential item functioning (DIF) estimates under the simultaneous item bias test (SIBTEST) framework. Data from a teacher certification test were analyzed using White candidates in the reference…
Descriptors: Test Bias, Computation, Sample Size, Accuracy
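The smoothing step itself is easy to demonstrate: fit a polynomial log-linear (Poisson) model to the observed score frequencies, which preserves the distribution's first few moments while removing sampling irregularities. The sketch below, using the statsmodels library, shows only this generic presmoothing technique on toy counts; it is not the study's code and does not include the SIBTEST step.

    import numpy as np
    import statsmodels.api as sm

    def loglinear_smooth(scores, freqs, degree=3):
        # Fit log(expected frequency) as a polynomial in score of the given degree.
        x = np.asarray(scores, float)
        z = (x - x.mean()) / x.std()  # standardize scores for numerical stability
        design = np.column_stack([z ** k for k in range(degree + 1)])  # 1, z, z^2, ...
        fit = sm.GLM(np.asarray(freqs, float), design,
                     family=sm.families.Poisson()).fit()
        return fit.fittedvalues

    scores = np.arange(0, 21)  # a 20-item test, raw scores 0 through 20
    freqs = np.round(200 * np.exp(-(scores - 12.0) ** 2 / 18.0)) + 1  # toy hump-shaped counts
    print(np.round(loglinear_smooth(scores, freqs), 1))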
Peer reviewed
Evans, Sion Wyn – Educational Studies in Mathematics, 2007
This paper draws on data from the development of annual national mathematics assessment materials for 7-year-old pupils in Wales for use during the period 2000-2002. The materials were developed in both English and Welsh and were designed to be matched. The paper reports on item analyses which sought items that exhibited differential performance…
Descriptors: Foreign Countries, Welsh, Test Bias, Educational Testing
Peer reviewed
Ferne, Tracy; Rupp, Andre A. – Language Assessment Quarterly, 2007
This article reviews research on differential item functioning (DIF) in language testing conducted primarily between 1990 and 2005 with an eye toward providing methodological guidelines for developing, conducting, and disseminating research in this area. The article contains a synthesis of 27 studies with respect to five essential sets of…
Descriptors: Test Bias, Evaluation Research, Testing, Language Tests