Source: ETS Research Report Series
Showing 1 to 15 of 195 results
Peer reviewed
Papageorgiou, Spiros; Davis, Larry; Ohta, Renka; Gomez, Pablo Garcia – ETS Research Report Series, 2022
In this research report, we describe a study to map the scores of the TOEFL® Essentials™ test to the Canadian Language Benchmarks (CLB). The TOEFL Essentials test is a four-skills assessment of foundational English language skills and communication abilities in academic and general (daily life) contexts. At the time of writing this…
Descriptors: Foreign Countries, Language Tests, English (Second Language), Second Language Learning
Peer reviewed
Yanxuan Qu; Sandip Sinharay – ETS Research Report Series, 2023
Though a substantial amount of research exists on imputing missing scores in educational assessments, there is little research on cases where responses or scores to an item are missing for all test takers. In this paper, we tackled the problem of imputing missing scores for tests for which the responses to an item are missing for all test takers.…
Descriptors: Scores, Test Items, Accuracy, Psychometrics
Peer reviewed
Hongwen Guo; Matthew S. Johnson; Daniel F. McCaffrey; Lixong Gu – ETS Research Report Series, 2024
The multistage testing (MST) design has been gaining attention and popularity in educational assessments. For testing programs that have small test-taker samples, it is challenging to calibrate new items to replenish the item pool. In the current research, we used the item pools from an operational MST program to illustrate how research studies…
Descriptors: Test Items, Test Construction, Sample Size, Scaling
Peer reviewed
Jing Miao; Sandip Sinharay; Chris Kelbaugh; Yi Cao; Wei Wang – ETS Research Report Series, 2023
In a targeted double-scoring procedure for performance assessments that are used for licensure and certification purposes, a subset of responses receives an independent second rating if the first rating falls into a preidentified critical score range (CSR) where an additional rating would lead to considerably more reliable pass-fail decisions.…
Descriptors: Scoring, Performance Based Assessment, Licensing Examinations (Professions), Certification
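The targeted double-scoring rule this abstract describes can be sketched in a few lines. Everything below is an illustrative assumption — the 1–6 rubric, the CSR bounds of 3–5, and the function names are made up for the sketch and are not ETS's operational values.

```python
# Illustrative sketch of targeted double-scoring: a response gets an
# independent second rating only when the first rating falls in a
# preidentified critical score range (CSR) around the pass-fail cut.

def needs_second_rating(first_rating, csr_low, csr_high):
    """True if the first rating falls inside the critical score range."""
    return csr_low <= first_rating <= csr_high

def final_score(first, second=None):
    """Average the two ratings when a second rating was collected."""
    return first if second is None else (first + second) / 2

# First ratings on a hypothetical 1-6 rubric; a CSR of 3-5 brackets the cut.
first_ratings = [2, 4, 6, 3, 5]
double_scored = [r for r in first_ratings if needs_second_rating(r, 3, 5)]
```

The design intent is that second ratings are spent only where they change pass-fail reliability, rather than double-scoring every response.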
Peer reviewed
Ling, Guangming; Williams, Jean; O'Brien, Sue; Cavalie, Carlos F. – ETS Research Report Series, 2022
Recognizing the appealing features of a tablet (e.g., an iPad), including size, mobility, touch screen display, and virtual keyboard, more educational professionals are moving away from larger laptop and desktop computers and turning to the iPad for their daily work, such as reading and writing. Following the results of a recent survey of…
Descriptors: Tablet Computers, Computers, Essays, Scoring
Peer reviewed
Ching-Ni Hsieh – ETS Research Report Series, 2024
The TOEFL Junior® tests are designed to evaluate young language students' English reading, listening, speaking, and writing skills in an English-medium secondary instructional context. This paper articulates a validity argument constructed to support the use and interpretation of the TOEFL Junior test scores for the purpose of placement, progress…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scores
Peer reviewed
Guo, Hongwen; Dorans, Neil J. – ETS Research Report Series, 2019
We derive formulas for the differential item functioning (DIF) measures that two routinely used DIF statistics are designed to estimate. The DIF measures that match on observed scores are compared to DIF measures based on an unobserved ability (theta or true score) for items that are described by either the one-parameter logistic (1PL) or…
Descriptors: Scores, Test Bias, Statistical Analysis, Item Response Theory
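As background for the abstract above, the one-parameter logistic (1PL) model it refers to has the standard item response function P(correct | θ) = 1 / (1 + e^−(θ − b)). A minimal sketch, with hypothetical group-specific difficulties to illustrate a theta-level DIF contrast:

```python
import math

def irf_1pl(theta, b):
    """1PL (Rasch) item response function:
    P(correct | theta) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Theta-level DIF for one item: the probability gap at matched ability
# when the item is easier (lower b) for the reference group.
# The difficulty values -0.2 and 0.3 are made-up illustrations.
dif_at_theta = irf_1pl(0.0, -0.2) - irf_1pl(0.0, 0.3)
```

Observed-score DIF statistics match examinees on total score rather than on θ, which is why the two kinds of measures can diverge.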
Peer reviewed
Klieger, David M.; Kotloff, Lauren J.; Belur, Vinetha; Schramm-Possinger, Megan E.; Holtzman, Steven L.; Bunde, Hezekiah – ETS Research Report Series, 2022
Intended consequences of giving applicants the option to select which test scores to report include potentially reducing measurement error and inequity in applicants' prior test familiarity. Our first study determined whether score choice options resulted in unintended consequences for lower performing subgroups by detrimentally increasing score…
Descriptors: College Entrance Examinations, Graduate Study, Scores, High Stakes Tests
Peer reviewed
Donoghue, John R.; McClellan, Catherine A.; Hess, Melinda R. – ETS Research Report Series, 2022
When constructed-response items are administered for a second time, it is necessary to evaluate whether the current Time B administration's raters have drifted from the scoring of the original administration at Time A. To study this, Time A papers are sampled and rescored by Time B scorers. Commonly the scores are compared using the proportion of…
Descriptors: Item Response Theory, Test Construction, Scoring, Testing
Peer reviewed
Attali, Yigal – ETS Research Report Series, 2020
Principles of skill acquisition dictate that raters should be provided with frequent feedback about their ratings. However, in current operational practice, raters rarely receive immediate feedback about their scores owing to the prohibitive effort required to generate such feedback. An approach for generating and administering feedback responses…
Descriptors: Feedback (Response), Evaluators, Accuracy, Scores
Peer reviewed
Ching-Ni Hsieh – ETS Research Report Series, 2023
Research in validity suggests that stakeholders' interpretation and use of test results should be an aspect of validity. Claims about the meaningfulness of test score interpretations and consequences of test use should be backed by evidence that stakeholders understand the definition of the construct assessed and the score report information. The…
Descriptors: Foreign Countries, Language Proficiency, English (Second Language), Language Tests
Peer reviewed
Guo, Hongwen; Rios, Joseph A.; Ling, Guangming; Wang, Zhen; Gu, Lin; Yang, Zhitong; Liu, Lydia O. – ETS Research Report Series, 2022
Different variants of the selected-response (SR) item type have been developed for various reasons (e.g., simulating realistic situations, examining critical-thinking and/or problem-solving skills). Generally, the variants of the SR item format are more complex than traditional multiple-choice (MC) items, which may be more challenging to test…
Descriptors: Test Format, Test Wiseness, Test Items, Item Response Theory
Peer reviewed
Fu, Jianbin; Feng, Yuling – ETS Research Report Series, 2018
In this study, we propose aggregating test scores with unidimensional within-test structure and multidimensional across-test structure based on a 2-level, 1-factor model. In particular, we compare 6 score aggregation methods: average of standardized test raw scores (M1), regression factor score estimate of the 1-factor model based on the…
Descriptors: Comparative Analysis, Scores, Correlation, Standardized Tests
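The simplest of the six aggregation methods named in the abstract, M1 (average of standardized test raw scores), can be sketched directly. The helper names and the raw scores below are illustrative, not taken from the study:

```python
from statistics import mean, pstdev

def standardize(scores):
    """Convert a list of raw scores to z-scores (population SD)."""
    m, s = mean(scores), pstdev(scores)
    return [(x - m) / s for x in scores]

def m1_aggregate(*tests):
    """M1: per-examinee average of standardized raw scores across tests.
    Each argument is one test's score list, in the same examinee order."""
    z_by_test = [standardize(t) for t in tests]
    return [mean(z) for z in zip(*z_by_test)]

# Hypothetical raw scores for three examinees on two tests.
composite = m1_aggregate([40, 50, 60], [55, 65, 75])
```

Standardizing first puts tests with different score scales on equal footing before averaging, which is what distinguishes M1 from a raw-score sum.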
Peer reviewed
Kim, Sooyeon; Walker, Michael – ETS Research Report Series, 2021
In this investigation, we used real data to assess potential differential effects associated with taking a test in a test center (TC) versus testing at home using remote proctoring (RP). We used a pseudo-equivalent groups (PEG) approach to examine group equivalence at the item level and the total score level. If our assumption holds that the PEG…
Descriptors: Testing, Distance Education, Comparative Analysis, Test Items
Peer reviewed
Buzick, Heather – ETS Research Report Series, 2021
The Praxis® Core Academic Skills for Educators (Core) tests are used in the teacher preparation program admissions process and as part of initial teacher licensure. The purpose of this study was to estimate the relationship between scores on Praxis Core tests and Praxis Subject Assessments and to test for differential prediction by…
Descriptors: Teacher Certification, Licensing Examinations (Professions), Prediction, Teacher Education Programs