Showing 1 to 15 of 18 results
Peer reviewed
Westrick, Paul A.; Schmidt, Frank L.; Le, Huy; Robbins, Steven B.; Radunzel, Justine M. R. – Educational Assessment, 2021
This meta-analytic path analysis presents evidence that first-year academic performance (FYAP), measured by first-year grade point average (FYGPA), plays the major role in determining second-year student retention and that socioeconomic status (SES), measured by parental income, plays a negligible role. Based on large sample data used in a previous…
Descriptors: Meta Analysis, Academic Achievement, Academic Persistence, Grade Point Average
Moore, Joann L.; Li, Tianli; Lu, Yang – ACT, Inc., 2020
The Every Student Succeeds Act requires that English Learners (ELs) be included in annual state testing (grades 3-8 and once in high school) and in each state's accountability system, disaggregated by subgroup, to ensure that they receive the support they need to learn English, participate fully in their education experience, and graduate…
Descriptors: College Entrance Examinations, Scores, English Language Learners, Accountability
Peer reviewed
Westrick, Paul A. – Educational Assessment, 2017
Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…
Descriptors: Undergraduate Students, Test Reliability, College Entrance Examinations, Longitudinal Studies
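The entry above surveys reliability coefficients for undergraduate GPA without settling on one. As a minimal sketch of a single option (not the article's method), coefficient alpha can be computed by treating each semester GPA as an "item"; the data below are invented for illustration.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for a students x semesters matrix of GPAs.

    Treats each semester GPA as an 'item': alpha = k/(k-1) * (1 - sum of
    item variances / variance of the total score).
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each semester GPA
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed GPA across semesters
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 students x 4 semester GPAs (illustration only).
gpas = np.array([
    [3.2, 3.4, 3.1, 3.3],
    [2.1, 2.4, 2.0, 2.2],
    [3.8, 3.7, 3.9, 3.6],
    [2.9, 3.0, 2.7, 3.1],
    [3.5, 3.2, 3.4, 3.3],
])
print(round(cronbach_alpha(gpas), 3))
```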
Peer reviewed
Coggins, Joanne V.; Kim, Jwa K.; Briggs, Laura C. – Research in the Schools, 2017
The Gates-MacGinitie Reading Comprehension Test, fourth edition (GMRT-4) and the ACT Reading Tests (ACT-R) were administered to 423 high school students in order to explore the similarities and dissimilarities of data produced through classical test theory (CTT) and item response theory (IRT) analysis. Despite the many advantages of IRT…
Descriptors: Item Response Theory, Test Theory, Reading Comprehension, Reading Tests
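As a small illustration of the CTT side of the comparison described above (the GMRT-4/ACT-R data are not reproduced here, so the response matrix below is invented), classical item difficulty and point-biserial discrimination can be computed as follows.

```python
import numpy as np

def ctt_item_stats(responses: np.ndarray):
    """Classical test theory item statistics from an examinees x items 0/1 matrix.

    Returns each item's difficulty (proportion correct) and its point-biserial
    correlation with the rest score (total score excluding the item).
    """
    difficulty = responses.mean(axis=0)
    total = responses.sum(axis=1)
    pt_biserial = []
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]  # exclude the item to avoid item-total overlap
        pt_biserial.append(np.corrcoef(responses[:, j], rest)[0, 1])
    return difficulty, np.array(pt_biserial)

# Hypothetical 6 examinees x 4 items (illustration only).
resp = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
])
p, rpb = ctt_item_stats(resp)
print(p, rpb)
```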
Powers, Sonya; Li, Dongmei; Suh, Hongwook; Harris, Deborah J. – ACT, Inc., 2016
ACT reporting categories and ACT Readiness Ranges are new features added to the ACT score reports starting in fall 2016. For each reporting category, the number correct score, the maximum points possible, the percent correct, and the ACT Readiness Range, along with an indicator of whether the reporting category score falls within the Readiness…
Descriptors: Scores, Classification, College Entrance Examinations, Error of Measurement
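The report above describes what appears on the score report rather than how it is computed; the sketch below simply assembles those pieces for one reporting category. The range bounds and the rule "lower <= number correct <= upper" are placeholder assumptions, not actual ACT Readiness Ranges.

```python
def reporting_category_summary(num_correct: int, max_points: int,
                               readiness_range: tuple[int, int]) -> dict:
    """Summarize one reporting category: raw count, percent correct, and a flag for
    whether the score falls within a (hypothetical) readiness range."""
    lo, hi = readiness_range
    return {
        "number_correct": num_correct,
        "max_points": max_points,
        "percent_correct": round(100 * num_correct / max_points, 1),
        "within_readiness_range": lo <= num_correct <= hi,
    }

print(reporting_category_summary(num_correct=18, max_points=24, readiness_range=(15, 24)))
```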
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu – ACT, Inc., 2013
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
Descriptors: Comparative Analysis, Error of Measurement, Scores, Scaling
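The three scale-score CSEM methods compared in the paper above are not reproduced here. For orientation, the classical binomial-error (Lord's) formula for the raw-score CSEM is sketched below; translating such estimates onto the reported score scale is the problem the paper's methods address.

```python
import math

def raw_score_csem(x: int, n_items: int) -> float:
    """Lord's binomial-error estimate of the conditional standard error of
    measurement at raw (number-correct) score x on an n-item test:
    CSEM(x) = sqrt(x * (n - x) / (n - 1))."""
    return math.sqrt(x * (n_items - x) / (n_items - 1))

# CSEM across the raw score range of a hypothetical 40-item test.
for x in (0, 10, 20, 30, 40):
    print(x, round(raw_score_csem(x, 40), 2))
```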
Peer reviewed
Ip, Edward Hak-Sing; Chen, Shyh-Huei – Applied Psychological Measurement, 2012
The problem of fitting unidimensional item-response models to potentially multidimensional data has been extensively studied. The focus of this article is on response data that contains a major dimension of interest but that may also contain minor nuisance dimensions. Because fitting a unidimensional model to multidimensional data results in…
Descriptors: Measurement, Item Response Theory, Scores, Computation
Peer reviewed
Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan – International Journal of Testing, 2012
Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…
Descriptors: Mathematics Tests, Achievement Tests, College Entrance Examinations, Error of Measurement
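As a simple unconditional illustration of the point above (not the conditional procedures the article develops), the SEM of a weighted composite under the assumption of uncorrelated component errors is:

```python
import math

def composite_sem(weights, sems):
    """Standard error of measurement of a weighted composite, assuming the
    measurement errors of the component tests are uncorrelated:
    SEM_C = sqrt( sum_i w_i^2 * SEM_i^2 )."""
    return math.sqrt(sum(w * w * s * s for w, s in zip(weights, sems)))

# Hypothetical battery: four equally weighted tests with differing SEMs.
print(round(composite_sem(weights=[0.25] * 4, sems=[1.6, 2.0, 1.8, 2.2]), 2))
```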
Peer reviewed
Webber, Douglas A. – Economics of Education Review, 2012
Using detailed individual-level data from public universities in the state of Ohio, I estimate the effect of various institutional expenditures on the probability of graduating from college. Using a competing risks regression framework, I find differential impacts of expenditure categories across student characteristics. I estimate that student…
Descriptors: Student Characteristics, Educational Finance, Measurement, Probability
Webber, Douglas A. – Cornell Higher Education Research Institute, 2011
Using detailed individual-level data from public universities in the state of Ohio, I estimate the effect of various institutional expenditures on the probability of graduating from college. Using a competing risks regression framework, I find differential impacts of expenditure categories across student characteristics. I estimate that student…
Descriptors: Public Colleges, Educational Finance, Cost Effectiveness, College Administration
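The two Webber entries above use a competing risks regression framework; that model is not reproduced here. As a generic sketch of the competing-risks idea with invented data, a nonparametric cumulative incidence (Aalen-Johansen) estimate of graduation in the presence of dropout looks like this:

```python
import numpy as np

def cumulative_incidence(times, events, cause, eval_time):
    """Nonparametric cumulative incidence of one event type in the presence of
    competing risks. `times` are years enrolled; `events` are 0 = censored,
    1 = graduated, 2 = dropped out; `cause` is the event of interest.
    A generic sketch, not the papers' regression model."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events)
    cif, surv = 0.0, 1.0
    for t in np.unique(times[events != 0]):       # distinct event times, ascending
        if t > eval_time:
            break
        at_risk = np.sum(times >= t)
        d_cause = np.sum((times == t) & (events == cause))
        d_any = np.sum((times == t) & (events != 0))
        cif += surv * d_cause / at_risk           # probability of the event of interest at t
        surv *= 1 - d_any / at_risk               # overall event-free survival past t
    return cif

# Hypothetical follow-up data for eight students (illustration only).
t = [4, 5, 6, 2, 4, 6, 3, 5]
e = [1, 1, 1, 2, 2, 0, 2, 1]   # 1 = graduated, 2 = dropped out, 0 = still enrolled/censored
print(round(cumulative_incidence(t, e, cause=1, eval_time=6), 3))
```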
Briggs, Derek C. – National Association for College Admission Counseling, 2009
This discussion paper represents one of the National Association for College Admission Counseling's (NACAC's) first post-Testing Commission steps in advancing the knowledge base and dialogue about test preparation. It describes various types of test preparation programs and summarizes the existing academic research on the effects of test…
Descriptors: Testing, Standardized Tests, School Counselors, College Admission
Tsai, Tsung-Hsun – 1997
The primary objective of this study was to find the smallest sample size for which equating based on a random groups design could be expected to result in less overall equating error than if no equating had been conducted. Mean, linear, and equipercentile equating methods were considered. Some of the analyses presented in this paper assumed that the…
Descriptors: Equated Scores, Error of Measurement, Estimation (Mathematics), Sample Size
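The mean and linear equating methods named in the abstract above have simple closed forms; a minimal sketch with invented random-groups data follows (equipercentile equating requires the full score distributions and smoothing decisions, so it is omitted here).

```python
import statistics

def mean_equate(x, mu_x, mu_y):
    """Mean equating: shift Form X scores so the two forms' means match."""
    return x - mu_x + mu_y

def linear_equate(x, mu_x, sd_x, mu_y, sd_y):
    """Linear equating: match both the mean and standard deviation of the forms."""
    return sd_y / sd_x * (x - mu_x) + mu_y

# Hypothetical random-groups samples on Form X and Form Y (illustration only).
form_x = [20, 22, 25, 27, 30, 18, 24]
form_y = [21, 24, 26, 29, 31, 20, 25]
mx, sx = statistics.mean(form_x), statistics.stdev(form_x)
my, sy = statistics.mean(form_y), statistics.stdev(form_y)
print(round(mean_equate(25, mx, my), 2), round(linear_equate(25, mx, sx, my, sy), 2))
```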
Go, Imelda C.; Woodruff, David J. – 1996
In previous works, D. J. Woodruff derived expressions for three different conditional test score variances: (1) the conditional standard error of prediction (CSEP); (2) the conditional standard error of measurement in prediction (CSEMP); and (3) the conditional standard error of estimation (CSEE). He also presented step-up formulas that require…
Descriptors: College Entrance Examinations, Error of Measurement, Estimation (Mathematics), High School Students
Pommerich, Mary; Hanson, Bradley A.; Harris, Deborah J.; Sconing, James A. – 1999
This paper focuses on methodological issues in applying equipercentile equating methods to pairs of tests that do not meet the assumptions of equating. This situation is referred to as a concordance situation, as opposed to an equating situation, and the end result is a concordance table that gives "comparable" scores between the tests.…
Descriptors: College Entrance Examinations, Comparative Analysis, Equated Scores, Error of Measurement
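As a rough sketch of the equipercentile linking behind a concordance table (with invented score samples and none of the smoothing or standard-error machinery the paper deals with):

```python
import numpy as np

def equipercentile_concordance(x_scores, y_scores, x_points):
    """Rough equipercentile linking: for each Test X score, find the Test Y score
    with (approximately) the same percentile rank in its own distribution.
    Uses empirical percentile ranks with linear interpolation."""
    x_scores = np.sort(np.asarray(x_scores, dtype=float))
    y_scores = np.sort(np.asarray(y_scores, dtype=float))
    # Percentile rank of each requested X score within the X distribution.
    pr = np.searchsorted(x_scores, x_points, side="right") / x_scores.size
    # Invert the Y distribution at those percentile ranks.
    y_pr = np.arange(1, y_scores.size + 1) / y_scores.size
    return np.interp(pr, y_pr, y_scores)

# Hypothetical score samples on two tests taken by comparable groups.
x = [14, 16, 18, 19, 21, 23, 25, 27, 28, 30]
y = [380, 410, 440, 460, 480, 500, 530, 560, 590, 620]
print(equipercentile_concordance(x, y, x_points=[18, 24, 28]))
```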
Peer reviewed
Kolen, Michael J.; And Others – Journal of Educational Measurement, 1992
A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)
Descriptors: College Entrance Examinations, Equations (Mathematics), Error of Measurement, Estimation (Mathematics)
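The 1992 article above uses a strong true score model to obtain conditional error variances of scale scores. The sketch below substitutes a plain binomial error model and a made-up raw-to-scale conversion, so it is a simplification of that approach, not the article's procedure.

```python
from math import comb, sqrt

def scale_score_csem(p_true, n_items, raw_to_scale):
    """Conditional SEM of scale scores at true proportion-correct p_true, assuming
    raw scores are binomial(n_items, p_true) and raw_to_scale maps each raw score
    to a reported scale score. A simplified stand-in for the strong true score
    approach, which also models the distribution of true scores."""
    probs = [comb(n_items, x) * p_true**x * (1 - p_true)**(n_items - x)
             for x in range(n_items + 1)]
    mean_s = sum(pr * raw_to_scale[x] for x, pr in enumerate(probs))
    var_s = sum(pr * (raw_to_scale[x] - mean_s) ** 2 for x, pr in enumerate(probs))
    return sqrt(var_s)

# Hypothetical 10-item test with a made-up raw-to-scale conversion table.
conversion = [1, 4, 8, 12, 15, 18, 22, 26, 30, 33, 36]
for p in (0.3, 0.5, 0.7, 0.9):
    print(p, round(scale_score_csem(p, 10, conversion), 2))
```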