Showing 961 to 975 of 3,711 results
Peer reviewed
Wu, Johnny; King, Kevin M.; Witkiewitz, Katie; Racz, Sarah Jensen; McMahon, Robert J. – Psychological Assessment, 2012
Research has shown that boys display higher levels of childhood conduct problems than girls, and Black children display higher levels than White children, but few studies have tested for scalar equivalence of conduct problems across gender and race. The authors fit a 2-parameter item response theory (IRT) model to examine item…
Descriptors: Item Analysis, Test Bias, Test Items, Item Response Theory
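For reference, a two-parameter logistic (2PL) IRT model of the kind described above is commonly written as
P(X_{ij} = 1 \mid \theta_j) = \frac{\exp[a_i(\theta_j - b_i)]}{1 + \exp[a_i(\theta_j - b_i)]},
where a_i is the discrimination and b_i the difficulty of item i, and \theta_j is the latent conduct-problem level of child j; scalar equivalence holds when a_i and b_i can be constrained equal across gender and race groups. The notation is a generic sketch, not necessarily the authors' parameterization.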
Chiu, Ting-Wei – ProQuest LLC, 2010
Guessing behavior is an important topic with regard to assessing proficiency on multiple-choice tests, particularly for examinees at lower levels of proficiency, for whom there is greater potential for systematic error or bias that inflates observed test scores. Methods that incorporate a correction for guessing on high-stakes tests generally rely…
Descriptors: Guessing (Tests), Item Response Theory, Multiple Choice Tests, Regression (Statistics)
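For reference, the classical correction for guessing (formula scoring) alluded to above subtracts a penalty for wrong answers:
S = R - \frac{W}{k - 1},
where R is the number of correct responses, W the number of incorrect responses, and k the number of options per item. This is the conventional textbook formula; the dissertation's own methods may differ.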
Huang, Xiaoting – ProQuest LLC, 2010
In recent decades, the use of large-scale standardized international assessments has increased drastically as a way to evaluate and compare the quality of education across countries. In order to make valid international comparisons, the primary requirement is to ensure the measurement equivalence between the different language versions of these…
Descriptors: Test Bias, Comparative Testing, Foreign Countries, Measurement
Peer reviewed
Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David – Journal of Educational Measurement, 2010
In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…
Descriptors: Test Bias, Models, Test Items, Difficulty Level
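As a rough sketch of the structure described above (not the authors' exact specification), a Rasch model with random person abilities and random item difficulties can be written as
P(X_{pi} = 1 \mid \theta_p, \beta_i) = \frac{\exp(\theta_p - \beta_i)}{1 + \exp(\theta_p - \beta_i)}, \quad \theta_p \sim N(0, \sigma_\theta^2),
with the item difficulties \beta_i drawn from a mixture of distributions, e.g. \beta_i \sim \pi\, N(\mu_1, \sigma_1^2) + (1 - \pi)\, N(\mu_2, \sigma_2^2), so that items falling in a separate mixture component can be flagged as exhibiting DIF.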
Peer reviewed
Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk – Multivariate Behavioral Research, 2010
Usually, methods for detection of differential item functioning (DIF) compare the functioning of items across manifest groups. However, the manifest groups with respect to which the items function differentially may not necessarily coincide with the true source of the bias. It is expected that DIF detection under a model that includes a latent DIF…
Descriptors: Test Bias, Item Response Theory, Models, Aptitude Tests
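As an illustration of the latent-group idea (the authors' model may be specified differently), suppose each person belongs to latent class g with probability \pi_g and, within class g, a Rasch model holds with class-specific item difficulties:
P(X_{pi} = 1 \mid \theta_p, g) = \frac{\exp(\theta_p - \beta_{ig})}{1 + \exp(\theta_p - \beta_{ig})}.
An item shows latent DIF when \beta_{ig} differs across classes, whether or not the classes line up with manifest groups such as gender or ethnicity.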
Shaw, Emily J. – College Board, 2015
This primer provides the reader with a deeper understanding of the concept of test validity and presents the most recent available validity evidence on the relationship between SAT® scores and important college outcomes. In addition, the content examined on the SAT is discussed, as well as the fundamental attention paid to the fairness of…
Descriptors: College Entrance Examinations, Test Validity, Scores, Outcomes of Education
New York State Education Department, 2015
This technical report provides detailed information regarding the technical, statistical, and measurement attributes of the New York State Testing Program (NYSTP) for the Grades 3-8 Common Core English Language Arts (ELA) and Mathematics 2015 Operational Tests. This report includes information about test content and test development, item (i.e.,…
Descriptors: Testing Programs, English, Language Arts, Mathematics Tests
Ngudgratoke, Sungworn – ProQuest LLC, 2009
In many educational assessment programs, the use of multiple test forms developed from the same test specification is common because administering different forms of the same test to different examinees makes it possible to maintain test security. When multiple test forms are used, it is necessary to make the assessment fair…
Descriptors: Equated Scores, Test Bias, Methods, Educational Assessment
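For context, the simplest score-linking procedure for parallel forms is linear equating, which places a score x from form X onto the scale of form Y via
l_Y(x) = \mu_Y + \frac{\sigma_Y}{\sigma_X}(x - \mu_X).
This is offered only as a generic illustration of equating; the dissertation examines its own set of methods.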
Peer reviewed
Woods, Carol M. – Applied Psychological Measurement, 2009
Differential item functioning (DIF) occurs when an item on a test or questionnaire has different measurement properties for one group of people versus another, irrespective of mean differences on the construct. There are many methods available for DIF assessment. The present article is focused on indices of partial association. A family of average…
Descriptors: Test Bias, Measurement, Correlation, Methods
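A classical index of partial association for DIF, given here for reference, is the Mantel-Haenszel common odds ratio computed across matched score levels k:
\hat{\alpha}_{MH} = \frac{\sum_k A_k D_k / N_k}{\sum_k B_k C_k / N_k},
where, at level k, A_k and B_k are the counts of correct and incorrect responses in the reference group, C_k and D_k the corresponding counts in the focal group, and N_k the total count. Whether this particular index belongs to the family of averaged indices the article studies is not stated in the truncated abstract.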
Peer reviewed
Roe, Cecilie; Bautz-Holter, Erik; Cieza, Alarcos – International Journal of Rehabilitation Research, 2013
Previous studies indicate that a worldwide measurement tool may be developed based on the International Classification of Functioning Disability and Health (ICF) Core Sets for chronic conditions. The aim of the present study was to explore the possibility of constructing a cross-cultural measurement of functioning for patients with low back pain…
Descriptors: Foreign Countries, Pain, Chronic Illness, Patients
Peer reviewed
Colwell, Nicole Makas – Journal of Education and Training Studies, 2013
This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…
Descriptors: Test Anxiety, Computer Assisted Testing, Evaluation Methods, Standardized Tests
Peer reviewed
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2013
Both testlet design and hierarchical latent traits are fairly common in educational and psychological measurements. This study aimed to develop a new class of higher order testlet response models that consider both local item dependence within testlets and a hierarchy of latent traits. Due to high dimensionality, the authors adopted the Bayesian…
Descriptors: Item Response Theory, Models, Bayesian Statistics, Computation
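As a simplified illustration of the two ingredients named above (local dependence within testlets and a trait hierarchy), a Rasch-type testlet model adds a person-specific testlet effect \gamma_{p d(i)} for the testlet d(i) containing item i,
P(X_{pi} = 1) = \frac{\exp(\theta_{pm} + \gamma_{p d(i)} - \beta_i)}{1 + \exp(\theta_{pm} + \gamma_{p d(i)} - \beta_i)},
and a higher-order structure links each first-order trait \theta_{pm} to a general trait, e.g. \theta_{pm} = \lambda_m \theta_p^{(g)} + \varepsilon_{pm}. The authors' models are more general (and estimated in a Bayesian framework), so this should be read only as a sketch.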
Peer reviewed
Mucherah, Winnie; Finch, W. Holmes; Keaikitse, Setlhomo – International Journal of Testing, 2012
Understanding adolescent self-concept is of great concern for educators, mental health professionals, and parents, as research consistently demonstrates that low self-concept is related to a number of problem behaviors and poor outcomes. Thus, accurate measurements of self-concept are key, and the validity of such measurements, including the…
Descriptors: Test Bias, Mental Health Workers, Validity, Self Concept Measures
Peer reviewed
Gomez, Laura Elisabet; Arias, Benito; Verdugo, Miguel Angel; Navas, Patricia – Social Indicators Research, 2012
The goal of this article is to describe the calibration of an instrument to assess quality of life-related personal outcomes using Rasch analysis. The sample was composed of 3,029 recipients of social services from Catalonia (Spain) and was selected using a probabilistic multistage sampling design. Results related to unidimensionality, item…
Descriptors: Foreign Countries, Social Services, Test Bias, Quality of Life
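The abstract does not say which Rasch-family model was used; for Likert-type quality-of-life items, a common choice is Andrich's rating scale model,
P(X_{pi} = x) = \frac{\exp \sum_{k=0}^{x} (\theta_p - \beta_i - \tau_k)}{\sum_{m=0}^{M} \exp \sum_{k=0}^{m} (\theta_p - \beta_i - \tau_k)}, \quad \tau_0 \equiv 0,
where \beta_i is the item location and the \tau_k are threshold parameters shared across items.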
Peer reviewed
Huang, Jinyan – Assessing Writing, 2012
Using generalizability (G-) theory, this study examined the accuracy and validity of the writing scores assigned to secondary school ESL students in the provincial English examinations in Canada. The major research question that guided this study was: Are there any differences between the accuracy and construct validity of the analytic scores…
Descriptors: Foreign Countries, Generalizability Theory, Writing Evaluation, Writing Tests
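For reference, in a simple persons-by-raters (p x r) design the relative generalizability coefficient is
E\rho^2 = \frac{\sigma_p^2}{\sigma_p^2 + \sigma_{pr,e}^2 / n_r},
where \sigma_p^2 is the person variance component, \sigma_{pr,e}^2 the person-by-rater interaction (plus error) component, and n_r the number of raters. The study's actual design likely involves additional facets (e.g., tasks), so this formula is only illustrative.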