Publication Date
In 2025 | 1 |
Since 2024 | 3 |
Since 2021 (last 5 years) | 4 |
Since 2016 (last 10 years) | 8 |
Since 2006 (last 20 years) | 25 |
Author
Lee, Sik-Yum | 2 |
Raykov, Tenko | 2 |
Song, Xin-Yuan | 2 |
Adelson, Jill L. | 1 |
Alper, Paul | 1 |
Angela Johnson | 1 |
Aylesworth, Richard | 1 |
Baldwin, Scott A. | 1 |
Bardhoshi, Gerta | 1 |
Barr, James | 1 |
Birenbaum, Menucha | 1 |
Publication Type
Reports - Descriptive | 51 |
Journal Articles | 42 |
Speeches/Meeting Papers | 3 |
Information Analyses | 1 |
Opinion Papers | 1 |
Education Level
Higher Education | 9 |
Elementary Secondary Education | 5 |
Postsecondary Education | 4 |
Adult Education | 2 |
High Schools | 1 |
Two Year Colleges | 1 |
Audience
Practitioners | 1 |
Researchers | 1 |
Location
New York | 2 |
North America | 2 |
Africa | 1 |
Asia | 1 |
China (Beijing) | 1 |
Europe | 1 |
Germany | 1 |
Senegal (Dakar) | 1 |
Taiwan | 1 |
Tennessee | 1 |
United States | 1 |
Assessments and Surveys
Schools and Staffing Survey… | 1 |
Trends in International… | 1 |
Susan K. Johnsen – Gifted Child Today, 2025
The author provides information about reliability and the areas that educators should examine in determining whether an assessment is consistent and trustworthy to use, and how it should be interpreted when making decisions about students. Reliability areas discussed in the column include internal consistency, test-retest or stability, inter-scorer…
Descriptors: Test Reliability, Academically Gifted, Student Evaluation, Error of Measurement
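As a rough illustration of the reliability areas named in this entry, the sketch below computes Cronbach's alpha (internal consistency), a test-retest correlation, and an inter-scorer correlation on made-up data; none of the numbers or variable names come from the column itself.

```python
# Illustrative reliability estimates on made-up data (not from the column).
import numpy as np

rng = np.random.default_rng(0)

# Scores of 50 students on a 10-item scale (internal consistency).
items = rng.normal(5, 1, size=(50, 1)) + rng.normal(0, 1, size=(50, 10))

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Test-retest (stability): the same students tested twice.
time1 = items.sum(axis=1)
time2 = time1 + rng.normal(0, 2, size=50)
test_retest_r = np.corrcoef(time1, time2)[0, 1]

# Inter-scorer: two raters scoring the same 50 responses.
scorer_a = rng.integers(1, 5, size=50)
scorer_b = np.clip(scorer_a + rng.integers(-1, 2, size=50), 1, 4)
inter_scorer_r = np.corrcoef(scorer_a, scorer_b)[0, 1]

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
print(f"Test-retest r:    {test_retest_r:.2f}")
print(f"Inter-scorer r:   {inter_scorer_r:.2f}")
```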
Johan Lyrvall; Zsuzsa Bakk; Jennifer Oser; Roberto Di Mari – Structural Equation Modeling: A Multidisciplinary Journal, 2024
We present a bias-adjusted three-step estimation approach for multilevel latent class (LC) models with covariates. The proposed approach involves (1) fitting a single-level measurement model while ignoring the multilevel structure, (2) assigning units to latent classes, and (3) fitting the multilevel model with the covariates while controlling for…
Descriptors: Hierarchical Linear Modeling, Statistical Bias, Error of Measurement, Simulation
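A minimal sketch of the step the bias adjustment hinges on, assuming the step-1 posterior class probabilities are already available: modal class assignment followed by the classification-error matrix D[t, s] = P(W = s | X = t). This is a generic illustration of the idea, not the authors' full estimator.

```python
# Sketch of step 2 of a bias-adjusted three-step approach: modal class
# assignment plus the classification-error matrix D[t, s] = P(W = s | X = t),
# computed from step-1 posterior probabilities (assumed already estimated).
import numpy as np

rng = np.random.default_rng(1)

# Fake posterior class probabilities for 1000 units and 3 latent classes,
# standing in for the output of the single-level measurement model (step 1).
raw = rng.gamma(2.0, size=(1000, 3))
posterior = raw / raw.sum(axis=1, keepdims=True)      # rows sum to 1

# Step 2: modal assignment W.
assigned = posterior.argmax(axis=1)

# Classification-error matrix: rows = true class t, columns = assigned class s.
n_classes = posterior.shape[1]
D = np.zeros((n_classes, n_classes))
for s in range(n_classes):
    D[:, s] = posterior[assigned == s].sum(axis=0)
D /= posterior.sum(axis=0)[:, None]                   # divide by expected class sizes

print(np.round(D, 3))   # step 3 would fit the structural model conditioning on D
```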
So, Julia Wai-Yin – Assessment Update, 2023
In this article, Julia So discusses the purpose of program assessment, four common missteps of program assessment and reporting, and how to prevent them. The four common missteps of program assessment and reporting she has observed are: (1) unclear or ambiguous program goals; (2) measurement error of program goals and outcomes; (3) incorrect unit…
Descriptors: Program Evaluation, Community Colleges, Evaluation Methods, Objectives
Angela Johnson; Elizabeth Barker; Marcos Viveros Cespedes – Educational Measurement: Issues and Practice, 2024
Educators and researchers strive to build policies and practices on data and evidence, especially on academic achievement scores. When assessment scores are inaccurate for specific student populations or when scores are inappropriately used, even data-driven decisions will be misinformed. To maximize the impact of the research-practice-policy…
Descriptors: Equal Education, Inclusion, Evaluation Methods, Error of Measurement
What Works Clearinghouse, 2020
This supplement concerns Appendix E of the "What Works Clearinghouse (WWC) Procedures Handbook, Version 4.1." The supplement extends the range of designs and analyses that can generate effect size and standard error estimates for the WWC. This supplement presents several new standard error formulas for cluster-level assignment studies,…
Descriptors: Educational Research, Evaluation Methods, Effect Size, Research Design
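The supplement's own Appendix E formulas are not reproduced here; as a generic point of reference, the sketch below computes a standardized mean difference (Hedges' g) and its conventional large-sample standard error, the kind of quantities the cluster-design formulas extend.

```python
# Generic standardized mean difference (Hedges' g) and a conventional
# large-sample standard error; the WWC supplement's cluster-design formulas
# are not reproduced here.
import numpy as np

def hedges_g_and_se(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    df = n_t + n_c - 2
    sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / df)
    d = (mean_t - mean_c) / sd_pooled
    j = 1 - 3 / (4 * df - 1)                      # small-sample correction
    g = j * d
    se = np.sqrt((n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c)))
    return g, se

g, se = hedges_g_and_se(mean_t=105.0, mean_c=100.0, sd_t=15.0, sd_c=14.0,
                        n_t=120, n_c=115)
print(f"g = {g:.3f}, SE = {se:.3f}")
```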
Yang, Shitao; Black, Ken – Teaching Statistics: An International Journal for Teachers, 2019
Employing a Wald confidence interval to test hypotheses about population proportions could lead to an increase in Type I or Type II errors unless the hypothesized value, p0, is used in computing its standard error rather than the sample proportion. Whereas the Wald confidence interval to estimate a population proportion uses the sample…
Descriptors: Error Patterns, Evaluation Methods, Error of Measurement, Measurement Techniques
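The distinction drawn in this entry, in a few lines of hypothetical code: the test statistic's standard error uses the hypothesized value p0, while the Wald interval's standard error uses the sample proportion.

```python
# The test's standard error uses the hypothesized value p0, while the Wald
# interval's standard error uses the sample proportion (made-up numbers).
import math

n, successes, p0 = 200, 88, 0.50
p_hat = successes / n

se_test = math.sqrt(p0 * (1 - p0) / n)        # SE under H0 (uses p0)
se_wald = math.sqrt(p_hat * (1 - p_hat) / n)  # SE for the Wald interval (uses p_hat)

z = (p_hat - p0) / se_test                    # test statistic
ci = (p_hat - 1.96 * se_wald, p_hat + 1.96 * se_wald)

print(f"p_hat = {p_hat:.3f}, z = {z:.2f}")
print(f"95% Wald CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```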
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2018
This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…
Descriptors: Measurement Techniques, Factor Analysis, Item Response Theory, Likert Scales
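The authors' latent variable modeling procedure is not reproduced here; as one common descriptive index of how strongly a general factor dominates, the sketch below computes the explained common variance (ECV) from hypothetical bifactor loadings.

```python
# Not the authors' procedure: the explained common variance (ECV), a common
# descriptive index of general-factor dominance, computed from hypothetical
# loadings on a general factor and on specific factors.
import numpy as np

general  = np.array([0.70, 0.65, 0.60, 0.72, 0.55, 0.68])   # loadings on the common factor
specific = np.array([0.30, 0.35, 0.25, 0.20, 0.40, 0.30])   # loadings on specific factors

ecv = (general**2).sum() / ((general**2).sum() + (specific**2).sum())
print(f"ECV = {ecv:.2f}")   # values near 1 suggest the common factor dominates
```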
Bardhoshi, Gerta; Erford, Bradley T. – Measurement and Evaluation in Counseling and Development, 2017
Precision is a key facet of test development, with score reliability determined primarily according to the types of error one wants to approximate and demonstrate. This article identifies and discusses several primary forms of reliability estimation: internal consistency (i.e., split-half, KR-20, α), test-retest, alternate forms, interscorer, and…
Descriptors: Scores, Test Reliability, Accuracy, Pretests Posttests
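A toy illustration of two of the internal consistency estimates listed in this entry, KR-20 for dichotomous items and a Spearman-Brown corrected split-half coefficient, on simulated 0/1 responses (not data from the article).

```python
# Toy versions of two internal consistency estimates: KR-20 for dichotomous
# (0/1) items and a Spearman-Brown corrected split-half coefficient.
import numpy as np

rng = np.random.default_rng(2)
ability = rng.normal(size=(200, 1))
items = (ability + rng.normal(size=(200, 12)) > 0).astype(int)   # 200 examinees, 12 items

def kr20(x):
    k = x.shape[1]
    p = x.mean(axis=0)                    # item difficulties
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - (p * (1 - p)).sum() / total_var)

def split_half(x):
    odd, even = x[:, ::2].sum(axis=1), x[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)                # Spearman-Brown step-up

print(f"KR-20:      {kr20(items):.2f}")
print(f"Split-half: {split_half(items):.2f}")
```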
Alper, Paul – Higher Education Review, 2014
In 1916 Robert Frost published his famous poem, "The Road Not Taken," in which he muses about what might have been had he chosen a different path, made a different choice. While counterfactual arguments in general can often lead to vacuous nowheres, frequently in statistics the data that are not presented actually exist, in a sense,…
Descriptors: Data Interpretation, Data Analysis, Error of Measurement, Theory Practice Relationship
Pokropek, Artur – Sociological Methods & Research, 2015
This article combines statistical and applied research perspectives, showing problems that might arise when measurement error is ignored in multilevel compositional effects analysis. The article focuses on data where independent variables are constructed measures. Simulation studies are conducted evaluating methods that could overcome the…
Descriptors: Error of Measurement, Hierarchical Linear Modeling, Simulation, Evaluation Methods
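Not the article's simulation design, but a minimal version of the underlying problem: when a group-level covariate is a mean built from only a few sampled members, it measures the true composition with error, and the estimated contextual effect is attenuated.

```python
# Minimal simulation (not the article's design): a group mean constructed from
# a small sample of members is an error-prone measure of the true group
# composition, which attenuates the estimated contextual effect.
import numpy as np

rng = np.random.default_rng(3)
n_groups, sampled_per_group, true_contextual_effect = 1000, 5, 0.50

true_group_mean = rng.normal(size=n_groups)
# Observed group mean built from only a few sampled members (with error).
members = true_group_mean[:, None] + rng.normal(size=(n_groups, sampled_per_group))
observed_group_mean = members.mean(axis=1)

# Group-level outcome depends on the *true* composition.
y = true_contextual_effect * true_group_mean + rng.normal(0.0, 0.5, size=n_groups)

def ols_slope(x, y):
    x_c = x - x.mean()
    return (x_c * (y - y.mean())).sum() / (x_c**2).sum()

print(f"slope using true group means:     {ols_slope(true_group_mean, y):.2f}")
print(f"slope using observed group means: {ols_slope(observed_group_mean, y):.2f}")  # attenuated
```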
Lee, Chun-Ting; Zhang, Guangjian; Edwards, Michael C. – Multivariate Behavioral Research, 2012
Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable.…
Descriptors: Personality Traits, Intervals, Monte Carlo Methods, Factor Analysis
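A small demonstration of the issue this entry raises (not the article's study): Pearson correlations computed on 5-point discretized versions of normal variables understate the correlation of the underlying continuous variables.

```python
# Pearson correlations of 5-point discretized versions of normal variables
# understate the correlation of the underlying continuous variables.
import numpy as np

rng = np.random.default_rng(4)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

# Cut each variable into 5 ordered categories (uneven thresholds, as in skewed items).
thresholds = [-0.5, 0.3, 1.0, 1.6]
ordinal = np.digitize(x, thresholds)

r_continuous = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
r_ordinal = np.corrcoef(ordinal[:, 0], ordinal[:, 1])[0, 1]
print(f"continuous r: {r_continuous:.2f}   ordinal (treated as numeric) r: {r_ordinal:.2f}")
```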
McCoach, D. Betsy; Adelson, Jill L. – Gifted Child Quarterly, 2010
This article provides a conceptual introduction to the issues surrounding the analysis of clustered (nested) data. We define the intraclass correlation coefficient (ICC) and the design effect, and we explain their effect on the standard error. When the ICC is greater than 0, the design effect is greater than 1. In such a scenario, the…
Descriptors: Statistical Significance, Error of Measurement, Correlation, Data Analysis
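The relationship described in this entry, with hypothetical numbers: the design effect is 1 + (m - 1) * ICC, and a standard error computed as if observations were independent should be inflated by its square root.

```python
# Design effect = 1 + (m - 1) * ICC; a naive standard error that ignores
# clustering is inflated by its square root (hypothetical numbers).
import math

icc = 0.15          # intraclass correlation coefficient
m = 25              # average cluster size
naive_se = 0.04     # SE computed as if observations were independent

design_effect = 1 + (m - 1) * icc
adjusted_se = naive_se * math.sqrt(design_effect)

print(f"design effect = {design_effect:.2f}")
print(f"adjusted SE   = {adjusted_se:.3f}")
```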
Rosch, David M.; Schwartz, Leslie M. – Journal of Leadership Education, 2009
As more institutions of higher education engage in the practice of leadership education, the effective assessment of these efforts lags behind due to a variety of factors. Without an intentional assessment plan, leadership educators are liable to make one or more of several common errors in assessing their programs and activities. This article…
Descriptors: Leadership Training, Administrator Education, College Outcomes Assessment, Program Evaluation
Wu, Margaret – Educational Measurement: Issues and Practice, 2010
In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…
Descriptors: Testing Programs, Educational Assessment, Measures (Individuals), Program Effectiveness
Birenbaum, Menucha – Studies in Educational Evaluation, 2007
High quality assessment practice is expected to yield valid and useful score-based interpretations about what the examinees know and are able to do with respect to a defined target domain. Given this assertion, the article presents a framework based on the "unified view of validity," advanced by Cronbach and Messick over two decades ago, to assist…
Descriptors: Quality Control, Student Evaluation, Validity, Evaluation Methods