Domingue, Benjamin W.; Kanopka, Klint; Stenhaug, Ben; Sulik, Michael J.; Beverly, Tanesia; Brinkhuis, Matthieu; Circi, Ruhan; Faul, Jessica; Liao, Dandan; McCandliss, Bruce; Obradovic, Jelena; Piech, Chris; Porter, Tenelle; Soland, James; Weeks, Jon; Wise, Steven L.; Yeatman, Jason – Journal of Educational and Behavioral Statistics, 2022
The speed-accuracy trade-off (SAT) suggests that time constraints reduce response accuracy. Its relevance in observational settings--where response time (RT) may not be constrained but respondent speed may still vary--is unclear. Using 29 data sets from cognitive tasks, we apply a flexible method for identification of the SAT (which…
Descriptors: Accuracy, Reaction Time, Task Analysis, College Entrance Examinations
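The snippet above truncates before describing the authors' identification method, so the following is an illustration only: a minimal sketch of the simplest observational SAT probe, a pooled logistic regression of response accuracy on log response time, where a positive slope is consistent with slower responses being more accurate. All data and parameter values below are simulated, not from the study.

```python
# Minimal sketch of probing a speed-accuracy trade-off (SAT) in
# observational response data. This is NOT the flexible method of
# Domingue et al. (the abstract is truncated); it is the simplest
# pooled check: regress accuracy on log response time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 5000
log_rt = rng.normal(loc=1.0, scale=0.5, size=n)      # simulated log response times
p_correct = 1 / (1 + np.exp(-(0.8 * log_rt - 0.3)))  # built-in SAT: slower -> more accurate
correct = rng.binomial(1, p_correct)

model = LogisticRegression().fit(log_rt.reshape(-1, 1), correct)
print(f"log-RT slope: {model.coef_[0, 0]:+.2f} (positive => accuracy rises with time)")
```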
Schaefer, Hillary S.; Callina, Kristina Schmid; Powers, Jeremiah; Kobylski, Gerald; Ryan, Diane; Lerner, Richard M. – Journal of Character Education, 2021
A developmental model linking professionalism and character growth is implemented at the U.S. Military Academy at West Point (USMA) through its programs and policies. The current study used growth-modeling techniques to examine developmental relationships between metrics of professional development and performance outcomes at USMA and assessed…
Descriptors: Military Schools, Grade Point Average, Professionalism, Values Education
Engelhard, George, Jr.; Kobrin, Jennifer L.; Wind, Stefanie A. – International Journal of Testing, 2014
The purpose of this study is to explore patterns in model-data fit related to subgroups of test takers from a large-scale writing assessment. Using data from the SAT, a calibration group was randomly selected to represent test takers who reported that English was their best language from the total population of test takers (N = 322,011). A…
Descriptors: College Entrance Examinations, Writing Tests, Goodness of Fit, English
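The snippet does not say which fit statistics were used, so purely as a sketch of how model-data fit is commonly quantified in a Rasch framework, the following computes infit and outfit mean squares from squared standardized residuals. The abilities, difficulties, and responses are simulated, not the SAT writing data.

```python
# Sketch of Rasch model-data fit via infit/outfit mean squares, one
# standard way to quantify item fit. Illustrative only; not necessarily
# the statistics Engelhard et al. report.
import numpy as np

rng = np.random.default_rng(1)

theta = rng.normal(size=(200, 1))    # 200 simulated person abilities
b = np.linspace(-2, 2, 20)           # 20 simulated item difficulties
P = 1 / (1 + np.exp(-(theta - b)))   # model-implied P(correct)
X = rng.binomial(1, P)               # simulated responses

W = P * (1 - P)                      # response variance under the model
Z2 = (X - P) ** 2 / W                # squared standardized residuals

outfit = Z2.mean(axis=0)                      # unweighted mean square per item
infit = (W * Z2).sum(axis=0) / W.sum(axis=0)  # information-weighted mean square
print("item infit range:", infit.min().round(2), "-", infit.max().round(2))
```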
Moses, Tim; Liu, Jinghua; Tan, Adele; Deng, Weiling; Dorans, Neil J. – ETS Research Report Series, 2013
In this study, differential item functioning (DIF) methods utilizing 14 different matching variables were applied to assess DIF in the constructed-response (CR) items from 6 forms of 3 mixed-format tests. Results suggested that the methods might produce distinct patterns of DIF results for different tests and testing programs, in that the DIF…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Item Analysis
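To make concrete how the choice of matching variable can change DIF conclusions for a constructed-response item, here is a sketch using a standardized mean difference (SMD) statistic computed twice, once per hypothetical matching subscore. It is a generic illustration, not the report's 14 matching variables or its data.

```python
# Sketch of DIF for a polytomous (constructed-response) item via a
# standardized mean difference (SMD), computed with two different
# matching variables. All scores below are simulated.
import numpy as np

def smd(item, match, group):
    """SMD = sum over strata s of p_f(s) * (mean_f(s) - mean_r(s)),
    weighting by the focal group's distribution over the matching score."""
    focal, ref = item[group == 1], item[group == 0]
    mf, mr = match[group == 1], match[group == 0]
    total = 0.0
    for s in np.unique(mf):
        w = np.mean(mf == s)              # focal weight at stratum s
        ref_at_s = ref[mr == s]
        if len(ref_at_s) == 0:
            continue
        total += w * (focal[mf == s].mean() - ref_at_s.mean())
    return total

rng = np.random.default_rng(2)
n = 4000
group = rng.binomial(1, 0.5, n)           # 0 = reference, 1 = focal (hypothetical)
mc_score = rng.integers(0, 11, n)         # matching variable A: MC subscore
cr_score = rng.integers(0, 7, n)          # matching variable B: CR subscore
item = np.clip(mc_score // 2 + rng.integers(-1, 2, n), 0, 5)  # studied CR item (0-5)

print("SMD matched on MC subscore:", round(smd(item, mc_score, group), 3))
print("SMD matched on CR subscore:", round(smd(item, cr_score, group), 3))
```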
Scarfone, Melissa Delores – ProQuest LLC, 2013
The purpose of this study was to investigate whether cognitive and noncognitive variables differ in how they predict academic performance for college students with learning disabilities. In particular, this study examined the extent to which the cognitive variables of high school grade point average and SAT (combined verbal and math) or ACT…
Descriptors: Predictor Variables, Cognitive Ability, Cognitive Processes, Academic Achievement
Kobrin, Jennifer L.; Kim, YoungKoung; Sackett, Paul R. – Educational and Psychological Measurement, 2012
There is much debate on the merits and pitfalls of standardized tests for college admission, with questions regarding the format (multiple-choice vs. constructed response), cognitive complexity, and content of these assessments (achievement vs. aptitude) at the forefront of the discussion. This study addressed these questions by investigating the…
Descriptors: Grade Point Average, Standardized Tests, Predictive Validity, Predictor Variables
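Studies of this kind typically quantify a predictor's contribution as the increment in R² over high school GPA alone. The following is a sketch only, with simulated data and assumed effect sizes, not the study's results.

```python
# Sketch of an incremental-validity comparison: does adding a test
# score to high school GPA improve prediction of first-year GPA?
# All data and coefficients are simulated assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 1000
hsgpa = rng.normal(3.2, 0.4, n)
sat = rng.normal(1100, 150, n)
fygpa = 0.5 * hsgpa + 0.001 * sat + rng.normal(0, 0.3, n)  # assumed relation

base = LinearRegression().fit(hsgpa.reshape(-1, 1), fygpa)
full = LinearRegression().fit(np.column_stack([hsgpa, sat]), fygpa)
r2_base = base.score(hsgpa.reshape(-1, 1), fygpa)
r2_full = full.score(np.column_stack([hsgpa, sat]), fygpa)
print(f"R2 HSGPA alone: {r2_base:.3f}, with test score: {r2_full:.3f}, "
      f"increment: {r2_full - r2_base:.3f}")
```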
Adedoyin, O. O. – Educational Research and Reviews, 2010
This quantitative study attempted to detect gender-biased test items in the Botswana Junior Certificate Examination in mathematics. To do so, a random sample of 4,000 students' responses to mathematics paper 1 of the examination was drawn from the 36,000 students who sat for…
Descriptors: Test Items, Foreign Countries, Statistical Analysis, Gender Bias
Gierl, Mark J.; Leighton, Jacqueline P.; Wang, Changjiang; Zhou, Jiawen; Gokiert, Rebecca; Tan, Adele – College Board, 2009
The purpose of the study is to validate the four algebra cognitive models in Gierl, Wang, et al., using student response data collected with protocol analysis methods to evaluate the knowledge structures and processing skills used by a sample of SAT test takers.
Descriptors: Algebra, Mathematics Tests, College Entrance Examinations, Student Attitudes
Kobrin, Jennifer L.; Schmidt, Amy Elizabeth – College Board, 2007
This report provides a brief summary of the research projects that have been conducted to support the development of the new SAT.
Descriptors: College Entrance Examinations, Educational Research, Educational Change, Research Projects

Cook, Linda L.; And Others – Journal of Educational Statistics, 1988
First- and second-order factor analyses were conducted, using the LISREL model, on correlation matrices among item parcels of verbal items of the Scholastic Aptitude Test. Focus was on determining whether statistical dependence among item scores can be explained by a single ability dimension. Results suggest future research possibilities related…
Descriptors: Factor Analysis, Item Analysis, Latent Trait Theory, Verbal Tests
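The LISREL first- and second-order models are not reproduced here, but a cheap first look at the same question (does one ability dimension suffice?) is to examine the eigenvalues of the parcel correlation matrix; a dominant first eigenvalue is consistent with unidimensionality. A sketch with simulated one-factor data:

```python
# Quick unidimensionality heuristic in the spirit of Cook et al.'s
# question. This is not their LISREL second-order model; it is an
# eigenvalue inspection of a simulated parcel correlation matrix.
import numpy as np

rng = np.random.default_rng(4)

n, k = 1000, 8                        # 1000 examinees, 8 hypothetical item parcels
factor = rng.normal(size=(n, 1))      # single common ability factor
loadings = rng.uniform(0.6, 0.8, size=(1, k))
parcels = factor @ loadings + rng.normal(scale=0.6, size=(n, k))

R = np.corrcoef(parcels, rowvar=False)
eig = np.sort(np.linalg.eigvalsh(R))[::-1]
print("eigenvalues:", eig.round(2))
print("1st/2nd ratio:", round(eig[0] / eig[1], 1),
      "(a large ratio suggests one dominant dimension)")
```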

Dorans, Neil J.; Kulick, Edward – Journal of Educational Measurement, 1986
The standardization method for assessing unexpected differential item performance, or differential item functioning, is introduced. Findings are summarized from five studies in which the statistical method of standardization was used to look for unexpected differences in item performance across subpopulations of examinees on the Scholastic Aptitude Test.…
Descriptors: Groups, Item Analysis, Sociometric Techniques, Standardized Tests
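The core statistic of the standardization method is STD P-DIF, the focal-group-weighted sum of conditional differences in proportion correct between focal and reference groups: STD P-DIF = Σ_s w_s (P_fs − P_rs), where s indexes levels of a matching score. A sketch with simulated data (the matching score range and the size of the built-in DIF are illustrative, not from the article):

```python
# STD P-DIF, the standardization method's summary statistic: compare
# proportions correct between focal and reference groups at each level
# of a matching score, weighted by the focal group's score distribution.
import numpy as np

def std_p_dif(correct, match, group):
    """STD P-DIF = sum_s w_s * (P_focal(s) - P_ref(s))."""
    f, r = group == 1, group == 0
    total = 0.0
    for s in np.unique(match[f]):
        w = np.mean(match[f] == s)            # focal weight at score level s
        ref_at_s = correct[r & (match == s)]
        if len(ref_at_s) == 0:
            continue
        total += w * (correct[f & (match == s)].mean() - ref_at_s.mean())
    return total

rng = np.random.default_rng(5)
n = 10000
group = rng.binomial(1, 0.3, n)               # hypothetical focal/reference split
match = rng.integers(20, 81, n)               # matching score, e.g. a formula score
p = 1 / (1 + np.exp(-(match - 50) / 10)) - 0.05 * group  # built-in DIF vs. focal group
correct = rng.binomial(1, np.clip(p, 0, 1))

print("STD P-DIF:", round(std_p_dif(correct, match, group), 3))
```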
Cook, Linda L.; And Others – 1982
Data from the Scholastic Aptitude Test-Verbal (SAT-V), SAT Mathematics (SAT-M), and Achievement Tests in Biology, American History, and Social Studies were used for this study. The temporal stability of item parameter estimates obtained for the same set of items calibrated for different examinees at different times was analyzed. It was believed…
Descriptors: Achievement Tests, Aptitude Tests, Equated Scores, Item Analysis
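A minimal version of the stability check such a study runs is to compare item difficulty estimates from the two calibrations directly, for instance via their correlation and root-mean-square difference. The drift and noise below are simulated, not the study's estimates.

```python
# Sketch of the temporal-stability question: how close are difficulty
# estimates for the same items calibrated on different examinee groups
# at different times? Values are simulated assumptions.
import numpy as np

rng = np.random.default_rng(6)
k = 60
b_time1 = rng.normal(0, 1, k)               # difficulties, calibration 1
b_time2 = b_time1 + rng.normal(0, 0.15, k)  # calibration 2: estimate + drift/noise

r = np.corrcoef(b_time1, b_time2)[0, 1]
rmsd = np.sqrt(np.mean((b_time1 - b_time2) ** 2))
print(f"stability correlation: {r:.3f}, RMSD: {rmsd:.3f} logits")
```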
Liu, Jinghua; Schuppan, Fred; Walker, Michael E. – College Board, 2005
This study explored whether adding items with more advanced mathematics content to the SAT Reasoning Test™ (SAT®) would affect test-taker performance. Two sets of SAT math equating sections were modified to form four subforms each. Different numbers of items with advanced content, taken from the SAT II: Mathematics Level IC Test (Math IC),…
Descriptors: College Entrance Examinations, Mathematics Tests, Test Items, Difficulty Level

Schmitt, Alicia P. – Journal of Educational Measurement, 1988
Standardized methodology was used to help identify item characteristics explaining differential item functioning among Hispanics on the Scholastic Aptitude Test (SAT), in two studies with 284,359 and 292,725 Whites, Mexican-Americans, and Puerto Ricans. Results indicate that true cognates (words with a common root in English and Spanish) and content of…
Descriptors: College Entrance Examinations, Cultural Influences, Hispanic Americans, Item Analysis

Dorans, Neil J. – Journal of Educational Measurement, 1986
The analytical decomposition demonstrates how the effects of item characteristics, test properties, individual examinee responses, and rounding rules combine to produce the item deletion effect on the equating/scaling function and candidate scores. The empirical portion of the report illustrates the effects of item deletion on reported score…
Descriptors: Difficulty Level, Equated Scores, Item Analysis, Latent Trait Theory
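To illustrate the mechanics of an item-deletion effect under linear equating (a deliberate simplification of the report's decomposition): e(x) = μ_y + (σ_y/σ_x)(x − μ_x), so deleting a form-X item changes μ_x and σ_x and thereby shifts every converted score. The data and design below are simulated, not the report's.

```python
# Sketch of an item-deletion effect on a linear equating function.
# Deleting an item from the new form changes that form's score mean and
# SD, which moves the equated (converted) scores. Simulated data only.
import numpy as np

rng = np.random.default_rng(7)
n, k = 5000, 40
p = rng.uniform(0.3, 0.9, k)                        # item p-values (assumed)
X_items = rng.binomial(1, p, size=(n, k))           # new-form item responses
y = rng.binomial(1, 0.6, size=(n, k)).sum(axis=1)   # old-form total scores

def linear_equate(x_scores, y_scores, x):
    """e(x) = mu_y + (sigma_y / sigma_x) * (x - mu_x)."""
    return y_scores.mean() + y_scores.std() / x_scores.std() * (x - x_scores.mean())

x_full = X_items.sum(axis=1)
x_del = np.delete(X_items, 0, axis=1).sum(axis=1)   # delete item 0 from the form

# Converted score for a raw score of 30, before and after deletion.
print("full form:    ", round(linear_equate(x_full, y, 30), 2))
print("item deleted: ", round(linear_equate(x_del, y, 30), 2))
```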