| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 7 |
| Descriptor | Records |
| --- | --- |
| Measurement | 7 |
| Sampling | 7 |
| Statistical Inference | 7 |
| Computation | 4 |
| Achievement Tests | 3 |
| Evaluation Methods | 3 |
| Data Analysis | 2 |
| Error of Measurement | 2 |
| Foreign Countries | 2 |
| International Assessment | 2 |
| Intervals | 2 |
| Source | Records |
| --- | --- |
| Applied Psychological Measurement | 2 |
| Applied Measurement in Education | 1 |
| Journal of Creative Behavior | 1 |
| Journal of Educational and Behavioral Statistics | 1 |
| Large-scale Assessments in Education | 1 |
| National Center for Education Evaluation and Regional Assistance | 1 |
| Author | Records |
| --- | --- |
| Azen, Razia | 1 |
| Divers, Jasmin | 1 |
| Gu, Fei | 1 |
| Haertel, Edward H. | 1 |
| Hoyle, Larry | 1 |
| Jaciw, Andrew P. | 1 |
| Kaplan, David | 1 |
| Kingston, Neal M. | 1 |
| McCarty, Alyn Turner | 1 |
| Rutkowski, David | 1 |
| Rutkowski, Leslie | 1 |
| Publication Type | Records |
| --- | --- |
| Journal Articles | 6 |
| Reports - Research | 6 |
| Numerical/Quantitative Data | 1 |
| Reports - Evaluative | 1 |
| Education Level | Records |
| --- | --- |
| Secondary Education | 3 |
| Elementary Secondary Education | 1 |
| Grade 8 | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Location | Records |
| --- | --- |
| Arizona | 1 |
| California | 1 |
| Iceland | 1 |
| Missouri | 1 |
| Assessments and Surveys | Records |
| --- | --- |
| Program for International Student Assessment | 2 |
Rutkowski, Leslie; Rutkowski, David – Journal of Creative Behavior, 2025
The Programme for International Student Assessment (PISA) introduced creative thinking as an innovative domain in 2022. This paper examines the unique methodological issues in international assessments and the implications of measuring creative thinking within PISA's framework, including stratified sampling, rotated form designs, and a distinct…
Descriptors: Creativity, Creative Thinking, Measurement, Sampling
Michaelides, Michalis P.; Haertel, Edward H. – Applied Measurement in Education, 2014
The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
Descriptors: Equated Scores, Test Items, Sampling, Statistical Inference
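Since only the opening of the abstract is shown, the sketch below illustrates just the general idea it raises, namely that equating variability comes from estimating common-item statistics on samples of examinees, and it can be quantified by resampling those examinees. The data, sample sizes, and simple mean-mean linking rule are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 0/1 responses of two examinee samples to the same common items.
n_items = 20
old_group = rng.binomial(1, 0.60, size=(2000, n_items))  # took the old form
new_group = rng.binomial(1, 0.55, size=(2000, n_items))  # took the new form

def linking_constant(old, new):
    """Mean-mean shift that aligns common-item performance across groups."""
    return old.mean() - new.mean()

# Treat the common items as fixed and resample examinees: the spread of the
# replicated linking constants estimates the standard error of equating.
replicates = []
for _ in range(1000):
    o = old_group[rng.integers(0, len(old_group), len(old_group))]
    n = new_group[rng.integers(0, len(new_group), len(new_group))]
    replicates.append(linking_constant(o, n))

print("linking constant:", round(linking_constant(old_group, new_group), 4))
print("bootstrap SE of equating:", round(np.std(replicates, ddof=1), 4))
```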
Padilla, Miguel A.; Divers, Jasmin; Newton, Matthew – Applied Psychological Measurement, 2012
Three different bootstrap methods for estimating confidence intervals (CIs) for coefficient alpha were investigated. In addition, the bootstrap methods were compared with the most promising coefficient alpha CI estimation methods reported in the literature. The CI methods were assessed through a Monte Carlo simulation utilizing conditions…
Descriptors: Intervals, Monte Carlo Methods, Computation, Sampling
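The snippet does not say which three bootstrap variants were compared, so the sketch below shows only one standard option, a percentile bootstrap confidence interval for coefficient alpha, computed on simulated data. The data-generating model and all names are illustrative assumptions, not the study's simulation conditions.

```python
import numpy as np

rng = np.random.default_rng(42)

def cronbach_alpha(items):
    """Coefficient alpha for an (n_persons, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical item scores driven by a single common factor (illustrative only).
n, k = 300, 8
factor = rng.normal(size=(n, 1))
scores = factor + rng.normal(scale=1.0, size=(n, k))

# Percentile bootstrap: resample persons with replacement, recompute alpha,
# and take the empirical 2.5th and 97.5th percentiles as the 95% CI.
boot = np.array([
    cronbach_alpha(scores[rng.integers(0, n, n)]) for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"alpha = {cronbach_alpha(scores):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```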
Kaplan, David; McCarty, Alyn Turner – Large-scale Assessments in Education, 2013
Background: In the context of international large-scale assessments, it is often not feasible to implement a complete survey of all relevant populations. For example, the OECD Program for International Student Assessment surveys both students and schools, but does not obtain information from teachers. In contrast, the OECD Teaching and Learning…
Descriptors: Measurement, International Assessment, Student Surveys, Teacher Surveys
Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M. – Applied Psychological Measurement, 2011
Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait distribution using splines, so no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…
Descriptors: Intervals, Item Response Theory, Models, Evaluation Methods
Olsen, Robert B.; Unlu, Fatih; Price, Cristofer; Jaciw, Andrew P. – National Center for Education Evaluation and Regional Assistance, 2011
This report examines the differences in impact estimates and standard errors that arise when these are derived using state achievement tests only (as pre-tests and post-tests), study-administered tests only, or some combination of state- and study-administered tests. State tests may yield different evaluation results relative to a test that is…
Descriptors: Achievement Tests, Standardized Tests, State Standards, Reading Achievement
Azen, Razia; Traxel, Nicole – Journal of Educational and Behavioral Statistics, 2009
This article proposes an extension of dominance analysis that allows researchers to determine the relative importance of predictors in logistic regression models. Criteria for choosing logistic regression R² analogues were determined and measures were selected that can be used to perform dominance analysis in logistic regression. A…
Descriptors: Regression (Statistics), Predictor Variables, Measurement, Simulation
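As a rough illustration of the idea rather than the authors' specific criteria or selected measures, the sketch below ranks predictors in a logistic regression by general dominance, using McFadden's pseudo-R² as the R² analogue: each predictor's incremental R² is averaged over all subsets of the remaining predictors. The simulated data and variable names x1, x2, x3 are made up.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated predictors and a binary outcome (illustrative only).
n = 500
X = rng.normal(size=(n, 3))
true_logit = 1.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))
names = ["x1", "x2", "x3"]

def mcfadden_r2(cols):
    # McFadden's pseudo-R^2, one common R^2 analogue for logistic regression.
    if not cols:
        return 0.0
    fit = sm.Logit(y, sm.add_constant(X[:, cols])).fit(disp=0)
    return 1 - fit.llf / fit.llnull

# General dominance: average each predictor's incremental R^2 contribution
# within each subset size, then average those means across sizes.
p = len(names)
general_dominance = {}
for i in range(p):
    others = [j for j in range(p) if j != i]
    means_by_size = []
    for size in range(p):
        deltas = [
            mcfadden_r2(list(s) + [i]) - mcfadden_r2(list(s))
            for s in itertools.combinations(others, size)
        ]
        means_by_size.append(np.mean(deltas))
    general_dominance[names[i]] = np.mean(means_by_size)

print(general_dominance)  # larger value -> relatively more important predictor
```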
