Showing 1,321 to 1,335 of 3,316 results
Peer reviewed
Liaw, Lih-Jiun; Hsieh, Ching-Lin; Hsu, Miao-Ju; Chen, Hui-Mei; Lin, Jau-Hong; Lo, Sing-Kai – International Journal of Rehabilitation Research, 2012
The aim of this study is to determine the test-retest reproducibility of the seven-item Short-Form Berg Balance Scale (SFBBS) and the five-item Short-Form Postural Assessment Scale for Stroke Patients (SFPASS) in individuals with chronic stroke. Fifty-two chronic stroke patients from two rehabilitation departments were included in the study. Both…
Descriptors: Measurement, Measures (Individuals), Correlation, Patients
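Test-retest reproducibility of this kind is often summarized with an intraclass correlation coefficient (ICC) computed from a subjects-by-occasions score matrix. The sketch below is illustrative only: it uses simulated data on a hypothetical 0-28 short-form scale and the standard Shrout-Fleiss two-way ICC formulas, not the analyses or data reported by Liaw et al.

```python
import numpy as np

def icc_two_way(scores: np.ndarray) -> dict:
    """Two-way ICCs from a subjects x occasions score matrix (no missing data).

    scores[i, j] = score of subject i on test occasion j.
    Returns ICC(3,1) (consistency) and ICC(2,1) (absolute agreement).
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)           # per-subject means
    col_means = scores.mean(axis=0)           # per-occasion means

    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    icc_consistency = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    icc_agreement = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
    return {"ICC(3,1)": icc_consistency, "ICC(2,1)": icc_agreement}

# Hypothetical data: 52 subjects measured on two occasions on a 0-28 scale.
rng = np.random.default_rng(0)
true_ability = rng.normal(20, 4, size=52)
two_sessions = np.column_stack(
    [true_ability + rng.normal(0, 1.5, 52) for _ in range(2)]
)
print(icc_two_way(np.clip(two_sessions, 0, 28)))
```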
Peer reviewed
Shang, Yi – Journal of Educational Measurement, 2012
Growth models are used extensively in the context of educational accountability to evaluate student-, class-, and school-level growth. However, when error-prone test scores are used as independent variables or right-hand-side controls, the estimation of such growth models can be substantially biased. This article introduces a…
Descriptors: Error of Measurement, Statistical Analysis, Regression (Statistics), Simulation
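The bias described here is a form of classical measurement-error attenuation: regressing an outcome on an error-prone covariate shrinks the estimated slope toward zero by a factor equal to the covariate's reliability. The simulation below (not the estimator proposed in the article; all numbers are hypothetical) makes the effect visible.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# True prior achievement and an outcome that depends on it with slope 0.8.
true_prior = rng.normal(0, 1, n)
outcome = 0.8 * true_prior + rng.normal(0, 0.5, n)

def ols_slope(x, y):
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# An error-free covariate recovers the true slope (~0.8).
print("slope, true covariate    :", round(ols_slope(true_prior, outcome), 3))

# An error-prone observed score (reliability 1 / 1.25 = 0.8) attenuates the
# slope toward reliability * 0.8 = 0.64.
observed_prior = true_prior + rng.normal(0, 0.5, n)   # error variance 0.25
print("expected attenuated slope:", round((1 / 1.25) * 0.8, 3))
print("slope, observed covariate:", round(ols_slope(observed_prior, outcome), 3))
```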
Peer reviewed
Katic, Alain; Ginsberg, Lawrence; Jain, Rakesh; Adeyi, Ben; Dirks, Bryan; Babcock, Thomas; Scheckner, Brian; Richards, Cynthia; Lasser, Robert; Turgay, Atilla; Findling, Robert L. – Journal of Attention Disorders, 2012
Objective: To describe clinically relevant effects of lisdexamfetamine dimesylate (LDX) on emotional expression (EE) in children with ADHD. Method: Children with ADHD participated in a 7-week, open-label, LDX dose-optimization study. Expression and Emotion Scale for Children (EESC) change scores were analyzed post hoc using two methods to…
Descriptors: Measurement, Error of Measurement, Emotional Response, Drug Therapy
Peer reviewed
Han, Kyung T. – Journal of Educational Measurement, 2012
Successful administration of computerized adaptive testing (CAT) programs in educational settings requires that test security and item exposure control issues be taken seriously. Developing an item selection algorithm that strikes the right balance between test precision and level of item pool utilization is the key to successful implementation…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
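As a point of reference for the precision-versus-utilization trade-off, one generic exposure-control device is "randomesque" selection: rather than always administering the single most informative item, the algorithm picks at random from the top few candidates. The sketch below illustrates that idea for a hypothetical 2PL item pool; it is not the selection algorithm proposed by Han.

```python
import numpy as np

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_item(theta, a, b, administered, rng, top_k=5):
    """Randomesque selection: choose at random among the top_k most
    informative unused items. Larger top_k spreads exposure across the
    pool at a small cost in measurement precision."""
    info = item_information(theta, a, b)
    info[list(administered)] = -np.inf        # mask items already given
    candidates = np.argsort(info)[-top_k:]
    return int(rng.choice(candidates))

# Hypothetical 200-item 2PL pool.
rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, 200)
b = rng.normal(0.0, 1.0, 200)

administered = set()
theta_hat = 0.0                               # provisional ability estimate
for _ in range(20):                           # 20-item adaptive test
    administered.add(select_item(theta_hat, a, b, administered, rng))
    # ... administer the item, score the response, update theta_hat ...
print(sorted(administered))
```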
Peer reviewed
Brazeau, James N.; Teatero, Missy L.; Rawana, Edward P.; Brownlee, Keith; Blanchette, Loretta R. – Journal of Child and Family Studies, 2012
A new measure, the Strengths Assessment Inventory-Youth self-report (SAI-Y), was recently developed to assess the strengths of children and adolescents between the ages of 10 and 18 years. The SAI-Y differs from similar measures in that it provides a comprehensive assessment of strengths that are intrinsic to the individual as well as strengths…
Descriptors: Error of Measurement, Psychometrics, Secondary School Students, Adolescents
Peer reviewed
Fan, Weihua; Hancock, Gregory R. – Journal of Educational and Behavioral Statistics, 2012
This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…
Descriptors: Robustness (Statistics), Hypothesis Testing, Monte Carlo Methods, Simulation
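RMM itself is fit with SEM software, but the variance-inequality problem it targets is easy to demonstrate: when group sizes and variances are both unequal, the pooled-variance t test can reject a true null far more often than the nominal rate. The simulation below uses hypothetical data and shows Welch's t only as a familiar robust baseline, not as the RMM approach.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
reps = 5000
alpha = 0.05
reject_pooled = reject_welch = 0

# Two groups with EQUAL means but unequal variances and unequal sizes;
# the smaller group has the larger variance, which makes the pooled test liberal.
for _ in range(reps):
    g1 = rng.normal(0.0, 1.0, size=60)
    g2 = rng.normal(0.0, 3.0, size=15)
    reject_pooled += stats.ttest_ind(g1, g2, equal_var=True).pvalue < alpha
    reject_welch += stats.ttest_ind(g1, g2, equal_var=False).pvalue < alpha

print("Type I error, pooled-variance t:", reject_pooled / reps)  # well above .05
print("Type I error, Welch t          :", reject_welch / reps)   # close to .05
```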
Peer reviewed
Arce, Alvaro J.; Wang, Ze – International Journal of Testing, 2012
The traditional approach to scale modified-Angoff cut scores transfers the raw cuts to an existing raw-to-scale score conversion table. Under the traditional approach, cut scores and conversion table raw scores are not only seen as interchangeable but also as originating from a common scaling process. In this article, we propose an alternative…
Descriptors: Generalizability Theory, Item Response Theory, Cutting Scores, Scaling
Peer reviewed
Brown, Allison R.; Finney, Sara J. – International Journal of Testing, 2011
The current study examined whether psychological reactance differs across compliant and non-compliant examinees. Given the lack of consensus regarding the factor structure and scoring of the Hong Psychological Reactance Scale (HPRS), its factor structure was evaluated and subsequently tested for measurement invariance (configural, metric, and…
Descriptors: Testing, Factor Structure, Measures (Individuals), Compliance (Psychology)
Peer reviewed
Hedges, Larry V. – Journal of Educational and Behavioral Statistics, 2011
Research designs involving cluster randomization are becoming increasingly important in educational and behavioral research. Many of these designs involve two levels of clustering or nesting (students within classes and classes within schools). Researchers would like to compute effect size indexes based on the standardized mean difference to…
Descriptors: Effect Size, Research Design, Experiments, Computation
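For designs with students nested in classes and classes nested in schools, one commonly used standardized mean difference divides the treatment-control difference by the total standard deviation, i.e., the square root of the sum of the school-, class-, and student-level variance components. The sketch below shows only that descriptive computation with assumed, hypothetical variance components; the article's contribution, estimators of such indexes and their sampling variances, is not reproduced here.

```python
import numpy as np

# Hypothetical summary statistics from a cluster-randomized trial with
# students nested in classes and classes nested in schools.
mean_treatment = 0.35
mean_control = 0.10

# Assumed variance components (not from the article): between-school,
# between-class-within-school, and within-class (student) variance.
var_school, var_class, var_student = 0.10, 0.15, 0.75

# Standardize the mean difference by the total SD (square root of the sum
# of the variance components across all levels).
total_var = var_school + var_class + var_student
total_sd = np.sqrt(total_var)
delta_total = (mean_treatment - mean_control) / total_sd
print(f"total SD = {total_sd:.3f}, effect size (total) = {delta_total:.3f}")

# Intraclass correlations describe how much variance sits at each level.
print(f"ICC(school) = {var_school / total_var:.3f}, "
      f"ICC(class) = {var_class / total_var:.3f}")
```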
Peer reviewed
Brennan, Robert L. – Applied Measurement in Education, 2011
Broadly conceived, reliability involves quantifying the consistencies and inconsistencies in observed scores. Generalizability theory, or G theory, is particularly well suited to addressing such matters in that it enables an investigator to quantify and distinguish the sources of inconsistencies in observed scores that arise, or could arise, over…
Descriptors: Generalizability Theory, Test Theory, Test Reliability, Item Response Theory
Peer reviewed
Yuan, Ke-Hai; Chan, Wai – Psychometrika, 2011
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample…
Descriptors: Statistical Bias, Error of Measurement, Regression (Statistics), Predictor Variables
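The quantities at issue are sample standardized regression coefficients, i.e., the slopes obtained after z-scoring the predictors and the outcome. The article's analytic standard-error and bias formulas are not reproduced here; the sketch below simply computes the standardized coefficients on simulated data and uses a nonparametric bootstrap as a generic empirical standard error.

```python
import numpy as np

def standardized_betas(X, y):
    """OLS slopes after z-scoring the predictors and the outcome."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    design = np.column_stack([np.ones(len(yz)), Xz])
    coef, *_ = np.linalg.lstsq(design, yz, rcond=None)
    return coef[1:]                       # drop the (near-zero) intercept

# Simulated data with two correlated predictors (all values hypothetical).
rng = np.random.default_rng(3)
n = 400
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
X = np.column_stack([x1, x2])
y = 0.4 * x1 + 0.2 * x2 + rng.normal(size=n)

betas = standardized_betas(X, y)

# Bootstrap standard errors: resample rows, re-estimate, take the SD.
boot = np.array([
    standardized_betas(X[idx], y[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(2000))
])
print("standardized betas:", np.round(betas, 3))
print("bootstrap SEs     :", np.round(boot.std(axis=0, ddof=1), 3))
```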
Peer reviewed
Liu, Jinghua; Sinharay, Sandip; Holland, Paul; Feigenbaum, Miriam; Curley, Edward – Educational and Psychological Measurement, 2011
Two different types of anchors are investigated in this study: a mini-version anchor and an anchor with less spread of difficulty than the tests to be equated. The latter is referred to as a midi anchor. The impact of these two different types of anchors on observed score equating is evaluated and compared with respect to systematic error…
Descriptors: Equated Scores, Test Items, Difficulty Level, Statistical Bias
Peer reviewed
Bentler, Peter M.; Yuan, Ke-Hai – Psychometrika, 2011
Indefinite symmetric matrices that are estimates of positive-definite population matrices occur in a variety of contexts such as correlation matrices computed from pairwise present missing data and multinormal based methods for discretized variables. This note describes a methodology for scaling selected off-diagonal rows and columns of such a…
Descriptors: Scaling, Factor Analysis, Correlation, Predictor Variables
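A pairwise-present "correlation" matrix can have a negative eigenvalue even though every entry looks like a legitimate correlation. The sketch below constructs such an indefinite matrix and applies one generic repair, uniformly shrinking the off-diagonal entries until the matrix is positive definite; this only illustrates the problem and is not the row-and-column scaling method described in the article.

```python
import numpy as np

def shrink_to_pd(R, tol=1e-8, step=0.01):
    """Generic repair (not the article's method): uniformly shrink all
    off-diagonal entries toward zero until the matrix is positive definite."""
    R = R.copy()
    off = ~np.eye(len(R), dtype=bool)
    while np.linalg.eigvalsh(R).min() <= tol:
        R[off] *= (1.0 - step)
    return R

# A "correlation-like" matrix that cannot come from any joint distribution;
# such matrices arise, e.g., from pairwise-present missing data.
R = np.array([[ 1.0,  0.9,  0.9],
              [ 0.9,  1.0, -0.9],
              [ 0.9, -0.9,  1.0]])

print("smallest eigenvalue before:", round(np.linalg.eigvalsh(R).min(), 3))
R_pd = shrink_to_pd(R)
print("smallest eigenvalue after :", round(np.linalg.eigvalsh(R_pd).min(), 3))
print(np.round(R_pd, 3))
```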
Peer reviewed
Svetina, Dubravka; Rutkowski, Leslie – Large-scale Assessments in Education, 2014
Background: When studying student performance across different countries or cultures, an important aspect for comparisons is that of score comparability. In other words, it is imperative that the latent variable (i.e., construct of interest) is understood and measured equivalently across all participating groups or countries, if our inferences…
Descriptors: Test Items, Item Response Theory, Item Analysis, Regression (Statistics)
Peer reviewed
Cabrera, Nolan L.; Milem, Jeffrey F.; Jaquette, Ozan; Marx, Ronald W. – American Educational Research Journal, 2014
The Arizona legislature passed HB 2281, which eliminated Tucson Unified School District's (TUSD's) Mexican American Studies (MAS) program, arguing the curriculum was too political. This program has been at the center of contentious debates, but a central question has not been thoroughly examined: Do the classes raise student achievement? The…
Descriptors: Academic Achievement, Mexican Americans, Mexican American Education, Politics of Education