Showing all 14 results
Peer reviewed
Piccone, Jason E. – Journal of Correctional Education, 2015
The effective evaluation of correctional programs is critically important. However, research in corrections rarely allows for the randomization of offenders to conditions of the study. This limitation compromises internal validity, and thus causal conclusions can rarely be drawn. Increasingly, researchers are employing propensity score matching…
Descriptors: Correctional Education, Program Evaluation, Probability, Scores
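The propensity score matching the Piccone abstract points to is easy to illustrate. Below is a minimal Python sketch assuming scikit-learn is available; the 1:1 nearest-neighbor rule and the variable names are illustrative choices, not details taken from the article:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_on_propensity(X, treated):
    """1:1 nearest-neighbor propensity score matching (with replacement).

    X       : (n, k) array of pre-treatment covariates
    treated : (n,) boolean treatment indicator
    Returns (treated_indices, matched_control_indices).
    """
    # Step 1: estimate each unit's probability of treatment from covariates.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

    # Step 2: pair every treated unit with the control whose score is closest.
    t_idx = np.flatnonzero(treated)
    c_idx = np.flatnonzero(~treated)
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
    _, j = nn.kneighbors(ps[t_idx].reshape(-1, 1))
    return t_idx, c_idx[j.ravel()]
```

Matched treated/control pairs can then be compared directly, approximating the covariate balance that randomization would have provided.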
Jacob, Robin Tepper; Smith, Thomas J.; Willard, Jacklyn A.; Rifkin, Rachel E. – MDRC, 2014
This policy brief summarizes the positive results of a rigorous evaluation of Reading Partners, a widely used program that offers one-on-one tutoring provided by community volunteers to struggling readers in low-income elementary schools. A total of 1,265 students in 19 schools in three states were randomly assigned to receive Reading…
Descriptors: Reading Instruction, Reading Programs, Volunteers, Tutoring
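For contrast with the quasi-experimental designs above, the random-assignment logic behind an evaluation like this one fits in a few lines. A hedged sketch with simulated scores; the numbers are invented for illustration and nothing here reproduces the Reading Partners data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated outcomes only; the true study data are not reproduced here.
n = 1265
assign = rng.permutation(np.repeat([0, 1], [n - n // 2, n // 2]))  # coin-flip assignment
scores = rng.normal(100 + 3 * assign, 15)                          # invented reading scores

# Because assignment is random, a simple difference in group means is an
# unbiased estimate of the program's impact.
t, c = scores[assign == 1], scores[assign == 0]
impact = t.mean() - c.mean()
se = np.sqrt(t.var(ddof=1) / t.size + c.var(ddof=1) / c.size)
print(f"estimated impact: {impact:.2f} points (SE {se:.2f})")
```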
Tennessee Higher Education Commission, 2012
The Tennessee General Assembly passed legislation in 2007 requiring that the State Board of Education produce an assessment on the effectiveness of teacher training programs. The law requires that the report include data on the performance of each program's graduates in the following areas: placement and retention rates, Praxis II results, and…
Descriptors: Program Effectiveness, Teacher Evaluation, Praxis, Academic Achievement
Peer reviewed
Peck, Laura R.; Camillo, Furio; D'Attoma, Ida – Canadian Journal of Program Evaluation, 2009
This article presents a creative and practical process for dealing with the problem of selection bias. Taking an algorithmic approach and capitalizing on the known treatment-associated variance in the X matrix, we propose a data transformation that allows estimating unbiased treatment effects. The approach does not call for modelling the data,…
Descriptors: Program Evaluation, Intervention, Experiments, Comparative Analysis
Tuttle, Christina Clark; Gleason, Philip; Knechtel, Virginia; Nichols-Barrer, Ira; Booker, Kevin; Chojnacki, Gregory; Coen, Thomas; Goble, Lisbeth – Mathematica Policy Research, Inc., 2015
KIPP (Knowledge is Power Program) is a national network of public charter schools whose stated mission is to help underserved students enroll in and graduate from college. Prior studies (see Tuttle et al. 2013) have consistently found that attending a KIPP middle school positively affects student achievement, but few have addressed longer-term…
Descriptors: Program Effectiveness, Program Evaluation, Academic Achievement, Charter Schools
Tuttle, Christina Clark; Gleason, Philip; Knechtel, Virginia; Nichols-Barrer, Ira; Booker, Kevin; Chojnacki, Gregory; Coen, Thomas; Goble, Lisbeth – Mathematica Policy Research, Inc., 2015
KIPP (Knowledge is Power Program) is a national network of public charter schools whose stated mission is to help underserved students enroll in and graduate from college. Prior studies (see Tuttle et al. 2013) have consistently found that attending a KIPP middle school positively affects student achievement, but few have addressed longer-term…
Descriptors: Academic Achievement, Charter Schools, Educational Innovation, Institutional Characteristics
Peer reviewed
Pack, Elbert; Stander, Aaron – NASSP Bulletin, 1981
Describes how to measure whether students are making significant gains in reading. (JM)
Descriptors: Academic Achievement, Measurement Techniques, Program Evaluation, Reading Programs
Peer reviewed
Posavac, E. J. – Evaluation and Program Planning, 1998
Misuses of null hypothesis significance testing are reviewed and alternative approaches are suggested for carrying out and reporting statistical tests that might be useful to program evaluators. Several themes, including the importance of respecting the magnitude of Type II errors and describing effect sizes in units stakeholders can understand,…
Descriptors: Effect Size, Evaluation Methods, Hypothesis Testing, Program Evaluation
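Posavac's emphasis on reporting effect sizes rather than bare p-values is straightforward to act on. A minimal sketch of Cohen's d with an approximate 95% confidence interval; the normal-approximation standard error is a common textbook formula, not something prescribed by the article:

```python
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference between two independent groups,
    with an approximate 95% confidence interval."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                         (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    d = (np.mean(a) - np.mean(b)) / pooled_sd
    # Normal-approximation standard error of d.
    se = np.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
    return d, (d - 1.96 * se, d + 1.96 * se)
```

Unlike a p-value, d can be translated into units stakeholders understand, e.g. the share of a standard deviation by which the average participant improved.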
Hanes, John C.; Hail, Michael – 1999
Many program evaluations involve some type of statistical testing to verify that the program has succeeded in accomplishing initially established goals. In many cases, this takes the form of null hypothesis significance testing (NHST) with t-tests, analysis of variance, or some form of the general linear model. This paper contends that, at least…
Descriptors: Change, Educational Indicators, Evaluation Methods, Hypothesis Testing
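A companion point to both NHST critiques above: the magnitude of the Type II error can be checked directly with a power calculation before a t-test's p-value is trusted. A sketch using statsmodels; the effect size, group size, and alpha are arbitrary illustration values, not figures from either paper:

```python
from statsmodels.stats.power import TTestIndPower

# Chance that a two-sample t-test detects a small effect (d = 0.2)
# with 50 participants per group at alpha = .05.
power = TTestIndPower().power(effect_size=0.2, nobs1=50, alpha=0.05)
print(f"power = {power:.2f}; Type II error rate = {1 - power:.2f}")
```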
McClure, Charles R.; Lankes, R. David; Gross, Melissa; Choltco-Devlin, Beverly – 2002
This manual is a first effort to identify, describe, and develop procedures for assessing various aspects of digital reference service. Its overall purpose is to improve the quality of digital reference services and to help librarians design and implement better ones. More specifically, its aim is to: assist…
Descriptors: Electronic Libraries, Evaluation Criteria, Evaluation Methods, Guidelines
National Academy of Sciences - National Research Council, Washington, DC. – 1995
The Workshop on Integrating Federal Statistics on Children provided a forum for assessing strengths and shortcomings of existing and proposed federal statistical data sources for children and families. In particular, these data sources were assessed with respect to their capacity to fill the most pressing information needs of those who formulate,…
Descriptors: Child Abuse, Child Development, Child Health, Children
Peer reviewed
Katz, Richard S.; Eagles, Munroe – PS: Political Science and Politics, 1996
Constructs a model that explains a large fraction of the variance in political science departmental rankings. Divides the objective predictors into two sets: one reflecting faculty quality ratings of department members, the other the effects of circumstances beyond a department's control. This model works well with most social science disciplines.…
Descriptors: Achievement Rating, Analysis of Variance, Causal Models, Credentials
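The kind of variance-explaining model Katz and Eagles describe can be sketched as an ordinary least-squares regression. Everything below is simulated for illustration; the predictor names and coefficients are invented, not estimates from the article:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100  # illustrative number of rated departments

# Invented stand-ins for the article's two predictor sets.
quality = rng.normal(size=n)        # faculty quality ratings
circumstance = rng.normal(size=n)   # circumstances beyond the department's control
ranking = 2.0 * quality + 0.8 * circumstance + rng.normal(size=n)

X = sm.add_constant(np.column_stack([quality, circumstance]))
fit = sm.OLS(ranking, X).fit()
print(f"R^2 = {fit.rsquared:.2f}")  # share of ranking variance explained
```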
Peer reviewed
Lowry, Robert C.; Silver, Brian D. – PS: Political Science and Politics, 1996
Asserts that variance between a university's reputation as an institution and its commitment to research has a greater impact on political science department rankings than any internal factors within the department. Includes several tables showing statistical variables of department and university rankings. (MJP)
Descriptors: Academic Education, Achievement Rating, Analysis of Variance, Credibility
Peer reviewed
Jackman, Robert W.; Siverson, Randolph M. – PS: Political Science and Politics, 1996
Analyzes the National Research Council's ratings of political science departments and finds that the ratings reflect two general sets of characteristics: the size and the productivity of the faculty. Reveals that the quality and impact of faculty research are more important than overall output. Includes tables of statistical data. (MJP)
Descriptors: Achievement Rating, Analysis of Variance, Credentials, Departments