Showing 1 to 15 of 41 results
Patrick, Megan E.; Terry-McElrath, Yvonne M.; Berglund, Patricia; Pang, Yuk C.; Heeringa, Steven G.; Si, Yajuan – Institute for Social Research, 2023
The Monitoring the Future (MTF) study monitors historical and developmental changes in substance use prevalence among key subgroups of the general U.S. adolescent and adult population. The current study first devised and evaluated a cohort-specific pooled analysis weighting procedure for the MTF panel study that weighted back to the initial 12th…
Descriptors: Substance Abuse, Incidence, Adolescents, Adults
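The truncated abstract does not spell out the weighting procedure. As a loudly hypothetical sketch of one standard ingredient of panel weighting (not the authors' method), attrition can be handled with inverse-probability-of-retention weights:

import pandas as pd
import statsmodels.api as sm

# Illustrative only: model panel retention, then inflate the base weights
# of retained respondents by 1 / P(retained | covariates).
def attrition_adjusted_weights(df, base_w, retained, covars):
    X = sm.add_constant(df[covars])
    fit = sm.Logit(df[retained], X).fit(disp=0)      # retention model
    p = pd.Series(fit.predict(X), index=df.index)    # P(retained | covars)
    out = df[df[retained] == 1].copy()
    out["weight"] = out[base_w] / p[out.index]
    out["weight"] *= df[base_w].sum() / out["weight"].sum()  # renormalize
    return out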
Peer reviewed
Direct link
Castellano, Katherine E.; McCaffrey, Daniel F.; Lockwood, J. R. – Journal of Educational Measurement, 2023
The simple average of student growth scores is often used in accountability systems, but it can be problematic for decision making. When computed from a small or moderate number of students, it can be sensitive to the sample, resulting in inaccurate representations of student growth, low year-to-year stability, and inequities for…
Descriptors: Academic Achievement, Accountability, Decision Making, Computation
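A minimal simulation (illustrative numbers, not the paper's data) shows the small-sample instability described above:

import numpy as np

# The class-level mean growth score fluctuates across draws roughly as
# sd / sqrt(n), so small classes look volatile from year to year.
rng = np.random.default_rng(0)
true_mean, sd = 50.0, 20.0
for n in (10, 30, 200):
    means = rng.normal(true_mean, sd, size=(5000, n)).mean(axis=1)
    print(f"n={n:>3}: SD of the class mean = {means.std():.1f}")

With n = 10 the class mean carries a standard error of about 6 points, so much of its year-to-year movement is sampling noise rather than real change.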
Peer reviewed
Direct link
Ting Zhang; Paul Bailey; Yuqi Liao; Emmanuel Sikali – Large-scale Assessments in Education, 2024
The EdSurvey package helps users download, explore variables in, extract data from, and run analyses on large-scale assessment data. The analysis functions in EdSurvey account for the use of plausible values for test scores, survey sampling weights, and their associated variance estimator. We describe the capabilities of the package in the context…
Descriptors: National Competency Tests, Information Retrieval, Data Collection, Test Validity
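EdSurvey itself is an R package; as a language-neutral sketch, the plausible-value machinery it implements follows Rubin's combining rules: average a statistic over the plausible values and add the between-value variance to the sampling variance.

import numpy as np

# estimates: the statistic computed once per plausible value (with weights);
# sampling_vars: the matching design-based variance estimates.
def combine_plausible_values(estimates, sampling_vars):
    est = np.asarray(estimates, dtype=float)
    M = len(est)
    point = est.mean()                          # average over PVs
    within = np.mean(sampling_vars)             # mean sampling variance
    between = est.var(ddof=1)                   # variance across PVs
    total = within + (1 + 1 / M) * between      # Rubin's total variance
    return point, total ** 0.5                  # estimate, standard error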
Peer reviewed
Download full text (PDF on ERIC)
Kim, Sooyeon; Walker, Michael E. – ETS Research Report Series, 2021
Equating the scores from different forms of a test requires collecting data that link the forms. Problems arise when the test forms to be linked are given to groups that are not equivalent and the forms share no common items by which to measure or adjust for this group nonequivalence. We compared three approaches to adjusting for group…
Descriptors: Equated Scores, Weighted Scores, Sampling, Multiple Choice Tests
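The three approaches are not named in the truncated abstract; a hedged sketch of one adjustment in this spirit poststratifies the form-X group's weights to match the form-Y group's demographic mix, then equates with weighted moments (column names illustrative):

import numpy as np
import pandas as pd

def poststratify(df_x, df_y, cell, w="w"):
    # Rescale X weights so each demographic cell matches Y's proportions.
    tgt = df_y.groupby(cell)[w].sum() / df_y[w].sum()
    cur = df_x.groupby(cell)[w].sum() / df_x[w].sum()
    return df_x[w] * df_x[cell].map(tgt / cur)

def weighted_linear_equating(x, wx, y, wy):
    # Match weighted means and SDs: maps a form-X score onto the Y scale.
    mx, my = np.average(x, weights=wx), np.average(y, weights=wy)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=wx))
    sy = np.sqrt(np.average((y - my) ** 2, weights=wy))
    return lambda score: my + (sy / sx) * (score - mx)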
Peer reviewed
Download full text (PDF on ERIC)
Shen, Ting; Konstantopoulos, Spyros – Practical Assessment, Research & Evaluation, 2022
Large-scale assessment survey (LSAS) data are collected via complex sampling designs with special features (e.g., clustering and unequal probability of selection). Multilevel models have been utilized to account for clustering effects whereas the probability weighting approach (PWA) has been used to deal with design informativeness derived from…
Descriptors: Sampling, Weighted Scores, Hierarchical Linear Modeling, Educational Research
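A recurring implementation detail in the probability weighting approach is how level-1 weights are scaled within clusters; the two scalings commonly attributed to Pfeffermann et al. (1998) can be sketched as follows (column names illustrative):

import pandas as pd

def scale_level1_weights(df, cluster="school", w="w_student"):
    g = df.groupby(cluster)[w]
    # One scaling makes the weights sum to the cluster's sample size n_j.
    size_f = g.transform("count") / g.transform("sum")
    # The other makes them sum to the effective cluster sample size.
    eff_f = g.transform("sum") / g.transform(lambda s: (s ** 2).sum())
    return df.assign(w_size=df[w] * size_f, w_eff=df[w] * eff_f)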
Peer reviewed
Direct link
Shelby J. Haberman; Sabine Meinck; Ann-Kristin Koop – Large-scale Assessments in Education, 2024
This paper extends existing work on teacher weighting in student-centered surveys by examining the practical implementation of deriving and using weights for teacher-centered analysis in the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS). The formal…
Descriptors: Elementary Secondary Education, Foreign Countries, Achievement Tests, Mathematics Achievement
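The paper's formal derivation is cut off above; as a loudly simplified, hypothetical illustration of the underlying idea, a teacher reached through a one-class-per-school student sample is selected with probability roughly P(school) times the share of the school's classes they teach, and the weight inverts that product:

# Hypothetical simplification, not the TIMSS/PIRLS derivation.
def teacher_weight(school_weight, classes_in_school, classes_taught):
    p_linked = classes_taught / classes_in_school  # sampled class is theirs
    return school_weight / p_linked

# e.g. a teacher covering 2 of 8 classes in a school with weight 40:
# teacher_weight(40, 8, 2) -> 160.0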
Peer reviewed
Direct link
Cooney, Jennifer; Siegel, Peter – New Directions for Institutional Research, 2019
In institutional research, surveys of students or faculty can be a helpful tool for gathering data. Surveying a sample of students or faculty, and computing weights that support inferences to the full student or faculty population, are important steps. In this chapter, we introduce the connected topics of sampling and weighting. We begin with a discussion on…
Descriptors: Sampling, Student Surveys, Teacher Surveys, Weighted Scores
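A minimal sketch of the two weighting steps such a chapter typically walks through, a base weight from the selection probability and a nonresponse adjustment within cells (column names illustrative):

import pandas as pd

def survey_weights(df, p_select, responded, cell):
    df = df.assign(base=1.0 / df[p_select])      # design (base) weight
    cell_total = df.groupby(cell)["base"].transform("sum")
    resp_total = (df["base"] * df[responded]).groupby(df[cell]).transform("sum")
    # Inflate respondents' weights to cover nonrespondents in their cell.
    df["weight"] = df["base"] * cell_total / resp_total
    return df[df[responded] == 1]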
Peer reviewed
Download full text (PDF on ERIC)
Burns, Erin; Zolnik, Edmund; Yanez, Christina; Mann, Rebecca – National Center for Education Statistics, 2022
The National Crime Victimization Survey (NCVS) is the nation's primary source of information on the nature of criminal victimization. The NCVS collects data each year from a nationally representative sample of households on the frequency, characteristics, and consequences of criminal victimization in the United States. Currently, the NCVS includes four supplemental surveys that are…
Descriptors: National Surveys, Bullying, Crime, Victims of Crime
Peer reviewed
Direct link
Goodman, Joshua T.; Dallas, Andrew D.; Fan, Fen – Applied Measurement in Education, 2020
Recent research has suggested that re-setting the standard for each administration of a small-sample examination is costly and does not adequately maintain similar performance expectations year after year. Small-sample equating methods have shown promise with samples between 20 and 30. For groups that have fewer than 20 students,…
Descriptors: Equated Scores, Sample Size, Sampling, Weighted Scores
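One family of small-sample methods shrinks the estimated equating function toward the identity (the synthetic linking function of Kim, von Davier, and Haberman); a sketch with linear equating:

import numpy as np

def synthetic_equating(x_scores, y_scores, w=0.5):
    # f(s) = w * linear_equating(s) + (1 - w) * s; w is set by the analyst.
    a = np.std(y_scores, ddof=1) / np.std(x_scores, ddof=1)
    b = np.mean(y_scores) - a * np.mean(x_scores)
    return lambda s: w * (a * s + b) + (1 - w) * s

With very few examinees, putting more weight on the identity (smaller w) trades some bias for a large reduction in sampling variance.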
Christine G. Casey, Editor – Centers for Disease Control and Prevention, 2024
The "Morbidity and Mortality Weekly Report" ("MMWR") series of publications is published by the Office of Science, Centers for Disease Control and Prevention (CDC), U.S. Department of Health and Human Services. Articles included in this supplement are: (1) Overview and Methods for the Youth Risk Behavior Surveillance System --…
Descriptors: High School Students, At Risk Students, Health Behavior, National Surveys
Peer reviewed
Download full text (PDF on ERIC)
Lu, Ru; Guo, Hongwen; Dorans, Neil J. – ETS Research Report Series, 2021
Two families of analysis methods can be used for differential item functioning (DIF) analysis. One family is DIF analysis based on observed scores, such as the Mantel-Haenszel (MH) procedure and the standardized proportion-correct metric; the other is analysis based on latent ability, in which the statistic is a measure of departure from…
Descriptors: Robustness (Statistics), Weighted Scores, Test Items, Item Analysis
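The observed-score MH statistic mentioned above can be sketched from the per-score-level 2x2 tables:

import math

def mh_ddif(tables):
    # tables: (a, b, c, d) = (ref right, ref wrong, focal right, focal wrong)
    # counts at each matched score level.
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    alpha = num / den               # MH common odds ratio
    return -2.35 * math.log(alpha)  # MH D-DIF on the ETS delta scale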
Peer reviewed
Direct link
Beath, Ken J. – Research Synthesis Methods, 2014
When performing a meta-analysis, unexplained variation beyond that predicted by within-study variation is usually modeled by a random effect. However, in some cases this is not sufficient to explain all the variation because of outlier or unusual studies. A previously described method is to define an outlier as a study requiring a higher random…
Descriptors: Mixed Methods Research, Robustness (Statistics), Meta Analysis, Prediction
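For context, the standard random-effects fit that such robust variants extend (DerSimonian-Laird) can be sketched as:

import numpy as np

def dersimonian_laird(effects, variances):
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)            # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)     # between-study variance
    w_star = 1.0 / (v + tau2)
    est = np.sum(w_star * y) / np.sum(w_star)
    return est, np.sqrt(1.0 / np.sum(w_star)), tau2

The outlier-accommodating approach described above then lets flagged studies take a higher random-effect variance than the single tau2 used here.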
Wagemaker, Hans, Ed. – International Association for the Evaluation of Educational Achievement, 2020
Although international large-scale assessment (ILSA) of education, pioneered by the International Association for the Evaluation of Educational Achievement, is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to…
Descriptors: International Assessment, Achievement Tests, Educational Assessment, Comparative Analysis
Peer reviewed
Direct link
Freitas, Pedro; Nunes, Luís Catela; Balcão Reis, Ana; Seabra, Carmo; Ferro, Adriana – Assessment in Education: Principles, Policy & Practice, 2016
The results of large-scale international assessments such as the Programme for International Student Assessment (PISA) have attracted considerable attention worldwide and are often used by policy-makers to support educational policies. To ensure that the published results represent the actual population, these surveys go through thorough scrutiny…
Descriptors: International Assessment, Student Characteristics, Weighted Scores, Evaluation Problems
Peer reviewed
Download full text (PDF on ERIC)
Qian, Jiahe; Jiang, Yanming; von Davier, Alina A. – ETS Research Report Series, 2013
Several factors could cause variability in item response theory (IRT) linking and equating procedures, such as variability across examinee samples and/or test items, seasonality, regional differences, native language diversity, gender, and other demographic variables. Hence, the following question arises: Is it possible to select optimal…
Descriptors: Item Response Theory, Test Items, Sampling, True Scores
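The abstract concerns selecting samples for such linking; for reference, a sketch of one common IRT linking transform (mean/sigma) whose estimates that sampling variability perturbs:

import numpy as np

def mean_sigma_link(b_x, b_y):
    # b_x, b_y: common-item difficulty estimates on forms X and Y.
    A = np.std(b_y, ddof=1) / np.std(b_x, ddof=1)
    B = np.mean(b_y) - A * np.mean(b_x)
    return lambda t: A * np.asarray(t, dtype=float) + B  # X scale -> Y scale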