Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 2
  Since 2016 (last 10 years): 8
  Since 2006 (last 20 years): 17
Descriptor
  Computer Assisted Testing: 25
  Test Items: 25
  Item Response Theory: 8
  Statistical Analysis: 8
  Adaptive Testing: 7
  Item Analysis: 7
  English (Second Language): 6
  Language Tests: 6
  Scores: 6
  Scoring: 6
  Simulation: 6
Source
  ETS Research Report Series: 25
Author
  Chang, Hua-Hua: 4
  Ali, Usama S.: 2
  Davey, Tim: 2
  Guzman-Orth, Danielle: 2
  Lopez, Alexis A.: 2
  Yamamoto, Kentaro: 2
  Zhang, Jinming: 2
  von Davier, Matthias: 2
  Ackerman, Debra J.: 1
  Adler, Rachel: 1
  Anderson, Carolyn J.: 1
Publication Type
  Journal Articles: 25
  Reports - Research: 25
  Tests/Questionnaires: 2
Education Level
  Higher Education: 4
  Postsecondary Education: 4
  Secondary Education: 4
  Elementary Education: 2
  Junior High Schools: 2
  Middle Schools: 2
  Early Childhood Education: 1
  Grade 7: 1
  Kindergarten: 1
  Primary Education: 1
Location
  New Jersey: 2
  Pennsylvania: 2
  Australia: 1
  China: 1
  Delaware: 1
  France: 1
  Germany: 1
  Illinois: 1
  Japan: 1
  Louisiana (New Orleans): 1
  Maryland: 1
Assessments and Surveys
  Graduate Record Examinations: 4
  Test of English as a Foreign…: 2
  Program for International…: 1
Eckerly, Carol; Jia, Yue; Jewsbury, Paul – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring
Gu, Lixiong; Ling, Guangming; Qu, Yanxuan – ETS Research Report Series, 2019
Research has found that the "a"-stratified item selection strategy (STR) for computerized adaptive tests (CATs) may lead to insufficient use of high-"a" items at later stages of the tests and thus to reduced measurement precision. A refined approach, unequal item selection across strata (USTR), effectively improves test precision over the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Use, Test Items
Zhou, Jiawen; Cao, Yi – ETS Research Report Series, 2020
In this study, we explored retest effects on test scores and response time for repeaters, examinees who retake an examination. We looked at two groups of repeaters: those who took the same form twice and those who took different forms on their two attempts for a certification and licensure test. Scores improved over the two test attempts, and…
Descriptors: Testing, Test Items, Computer Assisted Testing, Licensing Examinations (Professions)
Guzman-Orth, Danielle; Song, Yi; Sparks, Jesse R. – ETS Research Report Series, 2019
In this study, we investigated the challenges and opportunities in developing a computer-delivered English language arts (ELA) task intended to improve its accessibility for middle school English learners (ELs). Data from cognitive labs with 8 ELs at varying language proficiency levels provided rich insight into student-task…
Descriptors: Formative Evaluation, Test Construction, Test Items, Persuasive Discourse
Ali, Usama S.; Chang, Hua-Hua; Anderson, Carolyn J. – ETS Research Report Series, 2015
Polytomous items are typically described by multiple category-related parameters; situations, however, arise in which a single index is needed to describe an item's location along a latent trait continuum. Situations in which a single index would be needed include item selection in computerized adaptive testing or test assembly. Therefore single…
Descriptors: Item Response Theory, Test Items, Computer Assisted Testing, Adaptive Testing
Lopez, Alexis A.; Guzman-Orth, Danielle; Zapata-Rivera, Diego; Forsyth, Carolyn M.; Luce, Christine – ETS Research Report Series, 2021
Substantial progress has been made toward applying technology enhanced conversation-based assessments (CBAs) to measure the English-language proficiency of English learners (ELs). CBAs are conversation-based systems that use conversations among computer-animated agents and a test taker. We expanded the design and capability of prior…
Descriptors: Accuracy, English Language Learners, Language Proficiency, Language Tests
Lopez, Alexis A.; Tolentino, Florencia – ETS Research Report Series, 2020
In this study we investigated how English learners (ELs) interacted with "®" summative English language arts (ELA) and mathematics items, the embedded online tools, and accessibility features. We focused on how EL students navigated the assessment items; how they selected or constructed their responses; how they interacted with the…
Descriptors: English Language Learners, Student Evaluation, Language Arts, Summative Evaluation
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Matthias – ETS Research Report Series, 2017
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error, as (a) humans often code inconsistently and (b) coding reliability in…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Ackerman, Debra J. – ETS Research Report Series, 2018
Kindergarten entry assessments (KEAs) have increasingly been incorporated into state education policies over the past 5 years, with much of this interest stemming from Race to the Top--Early Learning Challenge (RTT-ELC) awards, Enhanced Assessment Grants, and nationwide efforts to develop common K-12 state learning standards. Drawing on…
Descriptors: Screening Tests, Kindergarten, Test Validity, Test Reliability
Swiggett, Wanda D.; Kotloff, Laurie; Ezzo, Chelsea; Adler, Rachel; Oliveri, Maria Elena – ETS Research Report Series, 2014
The computer-based "Graduate Record Examinations"® ("GRE"®) revised General Test includes interactive item types and testing environment tools (e.g., test navigation, on-screen calculator, and help). How well do test takers understand these innovations? If test takers do not understand the new item types, these innovations may…
Descriptors: College Entrance Examinations, Graduate Study, Usability, Test Items
Ali, Usama S.; Chang, Hua-Hua – ETS Research Report Series, 2014
Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may also offer similar advantages, and verification of such a hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…
Descriptors: Adaptive Testing, Simulation, Pretests Posttests, Test Items
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the "GRE"® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Davey, Tim; Lee, Yi-Hsuan – ETS Research Report Series, 2011
Both theoretical and practical considerations led the Graduate Record Examinations® (GRE®) revised General Test, here called the rGRE, to adopt a multistage adaptive design that will be administered continuously or nearly continuously and that can provide immediate score reporting. These circumstances sharply constrain the…
Descriptors: Context Effect, Scoring, Equated Scores, College Entrance Examinations
Sukkarieh, Jane Z.; von Davier, Matthias; Yamamoto, Kentaro – ETS Research Report Series, 2012
This document describes a solution to a problem in the automatic content scoring of the multilingual character-by-character highlighting item type. The solution is language independent and represents a significant enhancement: it not only facilitates automatic scoring but also plays an important role in clustering students' responses;…
Descriptors: Scoring, Multilingualism, Test Items, Role
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests