Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 6
Source
College Board: 8
Author
Dorans, Neil J.: 2
Hendrickson, Amy: 2
Kim, YoungKoung: 2
Melican, Gerald: 2
Antal, Judit: 1
Brennan, Robert L.: 1
DeCarlo, Lawrence T.: 1
Ewing, Maureen: 1
Feigenbaum, Miriam: 1
Kobrin, Jennifer L.: 1
Lee, Eunjung: 1
Publication Type
Reports - Research: 7
Numerical/Quantitative Data: 3
Non-Print Media: 1
Reference Materials - General: 1
Speeches/Meeting Papers: 1
Education Level
Higher Education: 6
Postsecondary Education: 6
High Schools: 3
Secondary Education: 3
Elementary Education: 1
Grade 8: 1
Junior High Schools: 1
Middle Schools: 1
Assessments and Surveys
SAT (College Admission Test): 4
National Merit Scholarship…: 2
Preliminary Scholastic…: 2
Kim, YoungKoung; DeCarlo, Lawrence T. – College Board, 2016
Because of concerns about test security, different test forms are typically used on different testing occasions. As a result, equating is necessary so that scores from the different test forms can be used interchangeably. To assure the quality of equating, multiple equating methods are often examined. Various equity…
Descriptors: Equated Scores, Evaluation Methods, Sampling, Statistical Inference
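To make the idea of interchangeable scores concrete, here is a minimal sketch of one common method, equipercentile equating under an equivalent-groups design. The score vectors and form difficulties are hypothetical; this is background, not the procedure evaluated in the report.

    import numpy as np

    def equipercentile_equate(scores_x, scores_y, grid):
        # percentile rank of each grid score on form X
        ranks = np.array([np.mean(scores_x <= s) for s in grid])
        # invert the form Y distribution: the Y score at the same percentile rank
        sorted_y = np.sort(scores_y)
        idx = np.clip((ranks * len(sorted_y)).astype(int), 0, len(sorted_y) - 1)
        return sorted_y[idx]

    # hypothetical raw scores from two 50-item forms given to equivalent groups
    rng = np.random.default_rng(0)
    form_x = rng.binomial(50, 0.60, 2000)   # slightly easier form
    form_y = rng.binomial(50, 0.55, 2000)
    conversion = equipercentile_equate(form_x, form_y, np.arange(51))

In practice, smoothed score distributions are used and several methods (linear, equipercentile, IRT-based) are compared, which is the kind of quality check the abstract describes.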
Lee, Eunjung; Lee, Won-Chan; Brennan, Robert L. – College Board, 2012
In almost all high-stakes testing programs, test equating is necessary to ensure that test scores across multiple test administrations are equivalent and can be used interchangeably. Test equating becomes even more challenging in mixed-format tests, such as Advanced Placement Program® (AP®) Exams, that contain both multiple-choice and constructed…
Descriptors: Test Construction, Test Interpretation, Test Norms, Test Reliability
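As background on mixed-format equating, the sketch below applies mean-sigma linear equating to a weighted multiple-choice/constructed-response composite. The section lengths, weights, and score distributions are hypothetical, not AP specifications.

    import numpy as np

    def linear_equate(x, mx, sx, my, sy):
        # mean-sigma linear equating: match the composite's first two moments
        return my + (sy / sx) * (x - mx)

    rng = np.random.default_rng(1)
    # hypothetical section scores: 40 MC items and a 0-20 CR section, two forms
    mc_new, cr_new = rng.binomial(40, 0.60, 3000), rng.binomial(20, 0.50, 3000)
    mc_old, cr_old = rng.binomial(40, 0.65, 3000), rng.binomial(20, 0.55, 3000)
    w_mc, w_cr = 1.0, 2.0                     # hypothetical section weights
    comp_new = w_mc * mc_new + w_cr * cr_new
    comp_old = w_mc * mc_old + w_cr * cr_old
    equated = linear_equate(comp_new, comp_new.mean(), comp_new.std(),
                            comp_old.mean(), comp_old.std())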
Kim, YoungKoung; Hendrickson, Amy; Patel, Priyank; Melican, Gerald; Sweeney, Kevin – College Board, 2013
The purpose of this report is to describe the procedure for revising the ReadiStep™ score scale using the field trial data, and to provide technical information about the development of the new ReadiStep score scale. In doing so, this report briefly introduces the three assessments--ReadiStep, PSAT/NMSQT®, and SAT®--in the College Board Pathway…
Descriptors: College Entrance Examinations, Educational Assessment, High School Students, Scores
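The report describes building a reporting scale from field-trial data. As a generic illustration only (the actual ReadiStep scaling procedure is not reproduced here), a raw-to-scale conversion is typically a monotonic transformation, rounded and truncated to the reporting range:

    def raw_to_scale(raw, slope, intercept, lo, hi):
        # generic linear raw-to-scale conversion; slope, intercept, and
        # reporting range are hypothetical, not ReadiStep values
        return max(lo, min(hi, round(slope * raw + intercept)))

    # e.g. map raw scores 0-44 onto a hypothetical 200-800 reporting scale
    conversion_table = {r: raw_to_scale(r, 13.6, 200, 200, 800) for r in range(45)}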
Antal, Judit; Melican, Gerald; Proctor, Thomas; Wiley, Andrew – College Board, 2010
Presented at the 2010 Annual Meeting of the National Council on Measurement in Education (NCME). The research investigates the effect of applying the Sinharay & Holland (2007) midi-test idea for building anchor tests to an ongoing testing program with a series of test versions, comparing these results to the more…
Descriptors: Test Items, Equated Scores, Test Construction, Simulation
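The Sinharay & Holland (2007) "midi-test" idea is an anchor whose item difficulties have roughly the same mean as the total test but a deliberately narrower spread. A minimal sketch of such an anchor-selection rule, using hypothetical difficulty estimates:

    import numpy as np

    def pick_midi_anchor(difficulties, n_anchor):
        # keep the items whose difficulties lie closest to the total-test mean,
        # giving an anchor with matching mean difficulty but reduced spread
        d = np.asarray(difficulties)
        order = np.argsort(np.abs(d - d.mean()))
        return np.sort(order[:n_anchor])

    rng = np.random.default_rng(2)
    item_difficulty = rng.normal(0.0, 1.0, 80)   # hypothetical difficulty estimates
    anchor_items = pick_midi_anchor(item_difficulty, 20)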
Hendrickson, Amy; Patterson, Brian; Ewing, Maureen – College Board, 2010
The psychometric considerations and challenges associated with including constructed-response items on tests are discussed, along with how these issues affect the form assembly specifications for mixed-format exams. Reliability and validity, security and fairness, pretesting, content and skills coverage, test length and timing, weights, statistical…
Descriptors: Multiple Choice Tests, Test Format, Test Construction, Test Validity
Kobrin, Jennifer L.; Melican, Gerald J. – College Board, 2007
This report synthesizes the research to date addressing the construct comparability of the SAT Reasoning Test™ and the prior SAT I: Reasoning Test, as well as the series of research studies addressing the equatability and subpopulation invariance of the SAT and SAT I.
Descriptors: College Entrance Examinations, Logical Thinking, Thinking Skills, Scores
Zhang, Yanling; Dorans, Neil J.; Matthews-López, Joy L. – College Board, 2005
Statistical procedures for detecting differential item functioning (DIF) are often used as an initial step to screen items for construct irrelevant variance. This research applies a DIF dissection method and a two-way classification scheme to SAT Reasoning Test™ verbal section data and explores the effects of deleting sizable DIF items on reported…
Descriptors: Test Bias, Test Items, Statistical Analysis, Classification
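For background, the most common initial DIF screen is the Mantel-Haenszel procedure; the sketch below computes the MH common odds ratio across matched score strata and reports it on the ETS delta scale. This is generic background with simulated data, not the DIF dissection method the report applies.

    import numpy as np

    def mh_ddif(correct, focal, stratum):
        # Mantel-Haenszel common odds ratio across score strata, expressed
        # on the ETS delta scale: MH D-DIF = -2.35 * ln(alpha_MH)
        num = den = 0.0
        for k in np.unique(stratum):
            m = stratum == k
            n = m.sum()
            a = np.sum(correct[m] & ~focal[m])    # reference group, right
            b = np.sum(~correct[m] & ~focal[m])   # reference group, wrong
            c = np.sum(correct[m] & focal[m])     # focal group, right
            d = np.sum(~correct[m] & focal[m])    # focal group, wrong
            num += a * d / n
            den += b * c / n
        return -2.35 * np.log(num / den)

    rng = np.random.default_rng(3)
    total = rng.integers(0, 5, 5000)              # matching score strata (hypothetical)
    focal = rng.random(5000) < 0.5                # focal-group indicator
    p = 0.3 + 0.1 * total - 0.05 * focal          # easier at higher strata, slight DIF
    correct = rng.random(5000) < p
    print(mh_ddif(correct, focal, total))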
Liu, Jinghua; Feigenbaum, Miriam; Dorans, Neil J. – College Board, 2005
Score equity assessment was used to evaluate linkings of new SAT® to the current SAT Reasoning Test™. Population invariance across gender groups was studied on the linkage of a new SAT critical reading prototype to a current SAT verbal section, and on the linkage of a new SAT math prototype to a current SAT math section. The results indicated that…
Descriptors: Gender Differences, Research Reports, Cognitive Tests, College Entrance Examinations
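A common index in such population-invariance studies is the root mean square difference (RMSD) of Dorans and Holland (2004), which compares each subgroup linking function e_{Y_j} with the total-group linking e_Y at every score x, weighting subgroups by their proportions w_j and standardizing by the total-group standard deviation of Y. The report may use this or a related index, so take the formula as background:

    \mathrm{RMSD}(x) = \frac{\sqrt{\sum_{j} w_j \left[ e_{Y_j}(x) - e_Y(x) \right]^2}}{\sigma_Y}

Values near zero at all x indicate that subgroup linkings agree with the total-group linking, i.e., the equating is invariant across the groups studied.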