Showing all 6 results
Peer reviewed
Longford, Nicholas T. – Journal of Educational and Behavioral Statistics, 2015
An equating procedure for a testing program with evolving distribution of examinee profiles is developed. No anchor is available because the original scoring scheme was based on expert judgment of the item difficulties. Pairs of examinees from two administrations are formed by matching on coarsened propensity scores derived from a set of…
Descriptors: Equated Scores, Testing Programs, College Entrance Examinations, Scoring
Peer reviewed (full text available on ERIC)
Guo, Hongwen; Liu, Jinghua; Dorans, Neil; Feigenbaum, Miriam – ETS Research Report Series, 2011
Maintaining score stability is crucial for an ongoing testing program that administers several tests per year over many years. One way to stall the drift of the score scale is to use an equating design with multiple links. In this study, we use the operational and experimental SAT® data collected from 44 administrations to investigate the effect…
Descriptors: Equated Scores, College Entrance Examinations, Reliability, Testing Programs
Peer reviewed
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
Peer reviewed (full text available on ERIC)
Moses, Tim; Liu, Jinghua; Tan, Adele; Deng, Weiling; Dorans, Neil J. – ETS Research Report Series, 2013
In this study, differential item functioning (DIF) methods utilizing 14 different matching variables were applied to assess DIF in the constructed-response (CR) items from 6 forms of 3 mixed-format tests. Results suggested that the methods might produce distinct patterns of DIF results for different tests and testing programs, in that the DIF…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Item Analysis
Peer reviewed
Ajuonuma, Juliet O. – African Higher Education Review, 2008
This study was designed to survey the implementation of continuous assessment (CA) in Nigerian universities. Two research questions and one hypothesis were formulated to guide the study. The sample for the study consisted of 1,340 respondents. A 24-item self-report instrument was used for the study. The data generated were analyzed…
Descriptors: Foreign Countries, Program Implementation, Testing Programs, Test Items
Lawrence, Ida M.; Rigol, Gretchen W.; Van Essen, Thomas; Jackson, Carol A. – College Entrance Examination Board, 2003
This paper provides an historical perspective on the content of the SAT. The review begins with the first College Board SAT (the Scholastic Aptitude Test), administered to 8,040 students on June 23, 1926. At that time, the SAT consisted of nine subtests: Definitions, Arithmetical Problems, Classification, Artificial Language,…
Descriptors: Research Reports, Educational History, Test Content, Aptitude Tests