Publication Date
In 2025 (1)
Since 2024 (2)
Since 2021, last 5 years (5)
Since 2016, last 10 years (5)
Since 2006, last 20 years (5)
Descriptor
Evaluation Methods (5)
Research Methodology (5)
Accuracy (2)
Models (2)
Statistical Analysis (2)
Admission (School) (1)
Adolescents (1)
Automation (1)
Children (1)
Comparative Analysis (1)
Competitive Selection (1)
Source
Grantee Submission (5)
Author
Andres De Los Reyes (1)
Anna Shapiro (1)
Anne-Marie Faria (1)
Arthur C. Graesser (1)
Breno Braga (1)
Brian A. Jacob (1)
Bridget A. Makol (1)
Christina Weiland (1)
Danielle S. McNamara (1)
Erica Greenberg (1)
Howard Bloom (1)
Publication Type
Journal Articles (2)
Reports - Descriptive (2)
Reports - Research (2)
Reports - Evaluative (1)
Education Level
Early Childhood Education (1)
Preschool Education (1)
Lingbo Tong; Wen Qu; Zhiyong Zhang – Grantee Submission, 2025
Factor analysis is widely used to identify latent factors underlying observed variables. This paper presents a comprehensive comparative study of two common methods for determining the optimal number of factors in factor analysis, the K1 rule and parallel analysis, along with a more recently developed method, the bass-ackward method.…
Descriptors: Factor Analysis, Monte Carlo Methods, Statistical Analysis, Sample Size
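Both classical criteria named in this abstract are easy to sketch: the K1 rule retains factors whose correlation-matrix eigenvalues exceed 1, while parallel analysis keeps counting down the sorted eigenvalues only while each exceeds those of comparable random data. A minimal NumPy sketch (the 95th-percentile threshold, simulation count, and toy data are illustrative assumptions, not the paper's settings; the bass-ackward method is not sketched here):

```python
import numpy as np

def suggest_n_factors(data, n_sims=100, percentile=95, seed=0):
    """Suggest how many factors to retain via the K1 rule and parallel analysis."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues of the correlation matrix, sorted descending.
    eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

    # Eigenvalues of correlation matrices of pure-noise data of the same shape.
    sim = np.empty((n_sims, p))
    for i in range(n_sims):
        noise = rng.standard_normal((n, p))
        sim[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    thresholds = np.percentile(sim, percentile, axis=0)

    k1 = int(np.sum(eigvals > 1.0))          # K1 rule: eigenvalues above 1
    pa = 0                                   # parallel analysis: count while
    for ev, th in zip(eigvals, thresholds):  # observed beats the noise benchmark
        if ev <= th:
            break
        pa += 1
    return {"K1": k1, "parallel_analysis": pa}

# One-factor toy data: six indicators loading on a single latent factor.
rng = np.random.default_rng(1)
latent = rng.standard_normal((300, 1))
data = latent @ np.ones((1, 6)) + rng.standard_normal((300, 6))
print(suggest_n_factors(data))  # both criteria should suggest 1 here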
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
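As a concrete, deliberately shallow illustration of replacing hand scoring with automated textual analysis, a sketch like the following computes a few standard surface features; the feature set is an assumption for illustration, not the authors' NLP pipeline:

```python
import re
from statistics import mean

def text_features(text):
    """A few shallow surface features of the kind automated scoring
    pipelines often start from (illustrative feature set)."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "mean_word_length": mean(len(w) for w in words) if words else 0.0,
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
    }

print(text_features("Automated analysis scales where hand scoring cannot. "
                    "Feature extraction replaces manual annotation."))
```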
Christina Weiland; Rebecca Unterman; Susan Dynarski; Rachel Abenavoli; Howard Bloom; Breno Braga; Anne-Marie Faria; Erica Greenberg; Brian A. Jacob; Jane Arnold Lincove; Karen Manship; Meghan McCormick; Luke Miratrix; Tomás E. Monarrez; Pamela Morris-Perez; Anna Shapiro; Jon Valant; Lindsay Weixler – Grantee Submission, 2024
Lottery-based identification strategies offer potential for generating the next generation of evidence on U.S. early education programs. The authors' collaborative network, comprising five research teams applying this design in early education settings along with methods experts, has identified six challenges that need to be carefully considered in this next…
Descriptors: Early Childhood Education, Program Evaluation, Evaluation Methods, Admission (School)
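A lottery-based design treats the randomized admission offer as the source of identification. A toy simulation of the basic estimates such designs support (all numbers hypothetical; one-sided noncompliance assumed for simplicity):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical lottery: offers are randomized; 80% of winners enroll,
# losers cannot enroll; enrolling raises the outcome score by 0.3.
offer = rng.integers(0, 2, n)
enroll = offer * (rng.random(n) < 0.8)
score = 0.3 * enroll + rng.standard_normal(n)

# Intent-to-treat: outcome gap by randomized offer.
itt = score[offer == 1].mean() - score[offer == 0].mean()
# Scaling by the enrollment gap gives a Wald/IV estimate of enrolling itself.
take_up = enroll[offer == 1].mean() - enroll[offer == 0].mean()
print(f"ITT = {itt:.3f}, IV = {itt / take_up:.3f}")  # IV should be near 0.3
```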
Andres De Los Reyes; Mo Wang; Matthew D. Lerner; Bridget A. Makol; Olivia M. Fitzpatrick; John R. Weisz – Grantee Submission, 2022
Researchers strategically assess youth mental health by soliciting reports from multiple informants. Typically, these informants (e.g., parents, teachers, youth themselves) vary in the social contexts where they observe youth. Decades of research reveal that the most common data conditions produced with this approach consist of discrepancies…
Descriptors: Mental Health, Measurement Techniques, Evaluation Methods, Research
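The multi-informant data conditions described here can be illustrated with a toy simulation in which three informants rate the same latent construct through different contextual lenses; the modest cross-informant correlations it produces are the discrepancies at issue (all parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400

# Hypothetical: three informants rate the same latent symptom level,
# each through a different contextual lens plus informant-specific noise.
latent = rng.standard_normal(n)
parent = latent + rng.standard_normal(n)
teacher = 0.6 * latent + rng.standard_normal(n)  # weaker contextual overlap
youth = latent + 1.2 * rng.standard_normal(n)

# Modest off-diagonal correlations reflect cross-informant discrepancies.
print(np.round(np.corrcoef(np.vstack([parent, teacher, youth])), 2))
```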
Jacob M. Schauer; Kaitlyn G. Fitzgerald; Sarah Peko-Spicer; Mena C. R. Whalen; Rrita Zejnullahi; Larry V. Hedges – Grantee Submission, 2021
Several programs of research have sought to assess the replicability of scientific findings in different fields, including economics and psychology. These programs attempt to replicate several findings and use the results to say something about large-scale patterns of replicability in a field. However, little work has been done to understand the…
Descriptors: Statistical Analysis, Research Methodology, Evaluation Methods, Replication (Evaluation)
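Replication programs of this kind typically reduce many original/replication pairs to a single replication rate, which is itself an estimate with sampling error. A toy sketch of that reduction (the success criterion and all parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
k = 200    # original/replication pairs
se = 0.15  # common standard error (assumption)

# Hypothetical: heterogeneous true effects; each study estimates its
# effect with sampling error.
true_effects = rng.normal(0.3, 0.1, k)
originals = rng.normal(true_effects, se)
replications = rng.normal(true_effects, se)

# One contested criterion: replication is significant in the same direction.
success = (np.abs(replications) / se > 1.96) & (
    np.sign(replications) == np.sign(originals))
rate = success.mean()
se_rate = np.sqrt(rate * (1 - rate) / k)  # the rate is itself uncertain
print(f"replication rate ~ {rate:.2f} (SE {se_rate:.2f})")
```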