Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 1
Since 2016 (last 10 years) | 5
Since 2006 (last 20 years) | 12
Descriptor
Computer Assisted Testing | 13
Models | 13
English (Second Language) | 6
Language Tests | 6
Scoring | 6
Second Language Learning | 6
Correlation | 4
Essays | 4
Item Response Theory | 4
Simulation | 4
Test Reliability | 4
Source
ETS Research Report Series | 13
Author
Attali, Yigal | 2
Breyer, F. Jay | 2
Ackerman, Debra J. | 1
Bauer, Malcolm | 1
Blanchard, Daniel | 1
Casabianca, Jodi M. | 1
Duchnowski, Matthew | 1
Eckerly, Carol | 1
Evanini, Keelan | 1
Handwerk, Phil | 1
Hao, Jiangang | 1
Publication Type
Journal Articles | 13
Reports - Research | 13
Numerical/Quantitative Data | 1
Tests/Questionnaires | 1
Location
Germany | 1
Switzerland | 1
Assessments and Surveys
Test of English as a Foreign Language | 5
Graduate Record Examinations | 1
National Merit Scholarship… | 1
Preliminary Scholastic… | 1
Eckerly, Carol; Jia, Yue; Jewsbury, Paul – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring
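The Eckerly, Jia, and Jewsbury report works within the IRT framework named in the abstract. As a hedged illustration only (the report's specific adaptive-testlet model is not reproduced in this listing), the two-parameter logistic (2PL) item response function can be sketched in Python; the parameter values below are arbitrary placeholders.

    import math

    # Two-parameter logistic (2PL) IRT model: probability that an examinee
    # with latent ability theta answers an item correctly, given item
    # discrimination a and difficulty b.
    def prob_correct_2pl(theta, a, b):
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    # Example: a moderately discriminating item (a = 1.2) of average
    # difficulty (b = 0.0), examinee one standard deviation above the mean.
    print(round(prob_correct_2pl(theta=1.0, a=1.2, b=0.0), 3))  # about 0.769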
Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm – ETS Research Report Series, 2016
Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…
Descriptors: Evaluation Methods, Games, Computer Assisted Testing, Data Collection
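The Hao et al. abstract proposes an XML data model for game/simulation-based assessment logs, but the schema itself is not shown in this listing. The snippet below is therefore only a hypothetical sketch of an event-based log in that spirit; the element and attribute names are assumptions, not the schema the report actually proposes.

    import xml.etree.ElementTree as ET

    # Hypothetical event-based log entry (illustrative only; not the schema
    # proposed in the report).
    log_xml = """<log session="S001" task="circuit-sim">
      <event time="2016-03-01T10:15:02Z" type="action">
        <name>connect_wire</name>
        <detail component="battery" terminal="positive"/>
      </event>
      <event time="2016-03-01T10:15:09Z" type="system">
        <name>state_update</name>
        <detail voltage="1.5"/>
      </event>
    </log>"""

    root = ET.fromstring(log_xml)
    # Reduce the log to a (timestamp, event name) sequence for analysis.
    events = [(e.get("time"), e.find("name").text) for e in root.findall("event")]
    print(events)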
Reckase, Mark D. – ETS Research Report Series, 2017
A common interpretation of achievement test results is that they provide measures of achievement that are much like other measures we commonly use for height, weight, or the cost of goods. In a limited sense, such interpretations are correct, but some nuances of these interpretations have important implications for the use of achievement test…
Descriptors: Models, Achievement Tests, Test Results, Test Construction
Rupp, André A.; Casabianca, Jodi M.; Krüger, Maleika; Keller, Stefan; Köller, Olaf – ETS Research Report Series, 2019
In this research report, we describe the design and empirical findings for a large-scale study of essay writing ability with approximately 2,500 high school students in Germany and Switzerland on the basis of 2 tasks with 2 associated prompts, each from a standardized writing assessment whose scoring involved both human and automated components.…
Descriptors: Automation, Foreign Countries, English (Second Language), Language Tests
Ackerman, Debra J. – ETS Research Report Series, 2020
Over the past 8 years, U.S. kindergarten classrooms have been impacted by policies mandating or recommending the administration of a specific kindergarten entry assessment (KEA) in the initial months of school as well as the increasing reliance on digital technology in the form of mobile apps, touchscreen devices, and online data platforms. Using…
Descriptors: Kindergarten, School Readiness, Computer Assisted Testing, Preschool Teachers
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
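The Breyer et al. abstract concerns using e-rater as a check score rather than as a contributory score. The sketch below illustrates the general check-score idea under assumed values; the GRE program's actual adjudication thresholds and rules are specified in the report, not here.

    # Generic check-score logic (an illustrative assumption, not the
    # operational GRE rules): the automated score only flags discrepant human
    # scores for additional human review; it does not contribute to the
    # reported score.
    def resolve_essay_score(human_score, machine_score, threshold=1.0):
        if abs(human_score - machine_score) > threshold:
            return {"score": None, "action": "route to second human rater"}
        return {"score": human_score, "action": "report human score"}

    print(resolve_essay_score(human_score=4.0, machine_score=3.5))  # within threshold
    print(resolve_essay_score(human_score=5.0, machine_score=3.0))  # discrepant; flagged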
Evanini, Keelan; Heilman, Michael; Wang, Xinhao; Blanchard, Daniel – ETS Research Report Series, 2015
This report describes the initial automated scoring results that were obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the "TOEFL Junior"® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring…
Descriptors: Computer Assisted Testing, Automation, Language Tests, Second Language Learning
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
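The Zhang, Breyer, and Lorenz abstract lists human-machine agreement among the evaluation dimensions. The sketch below computes two agreement statistics commonly used in automated-scoring evaluations, exact agreement and quadratic weighted kappa; it illustrates the statistics only and makes no claim about the report's actual results.

    from collections import Counter

    # Quadratic weighted kappa between two sets of integer essay scores on a
    # fixed scale (min_score..max_score).
    def quadratic_weighted_kappa(human, machine, min_score=1, max_score=6):
        cats = range(min_score, max_score + 1)
        n = len(human)
        obs = Counter(zip(human, machine))
        h_marg, m_marg = Counter(human), Counter(machine)
        span = (max_score - min_score) ** 2
        num = den = 0.0
        for i in cats:
            for j in cats:
                w = (i - j) ** 2 / span
                num += w * obs[(i, j)] / n
                den += w * (h_marg[i] / n) * (m_marg[j] / n)
        return 1.0 - num / den

    human = [4, 3, 5, 2, 4, 3, 5, 4]      # made-up human ratings
    machine = [4, 3, 4, 2, 5, 3, 5, 4]    # made-up automated ratings
    exact = sum(h == m for h, m in zip(human, machine)) / len(human)
    print(round(exact, 2), round(quadratic_weighted_kappa(human, machine), 3))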
Rotou, Ourania; Patsula, Liane; Steffen, Manfred; Rizavi, Saba – ETS Research Report Series, 2007
Traditionally, the fixed-length linear paper-and-pencil (P&P) mode of administration has been the standard method of test delivery. With the advancement of technology, however, the popularity of administering tests using adaptive methods like computerized adaptive testing (CAT) and multistage testing (MST) has grown in the field of measurement…
Descriptors: Comparative Analysis, Test Format, Computer Assisted Testing, Models
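The Rotou et al. abstract compares linear paper-and-pencil delivery with adaptive methods such as CAT and MST. As a hedged illustration of one core ingredient of CAT (not the specific designs compared in the report), the sketch below selects the next item by maximum Fisher information under a 2PL model; the item pool is invented.

    import math

    # Fisher information of a 2PL item (discrimination a, difficulty b) at
    # the current ability estimate theta.
    def item_information_2pl(theta, a, b):
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        return a * a * p * (1.0 - p)

    # One CAT step: among items not yet administered, pick the one with the
    # greatest information at the provisional ability estimate.
    def select_next_item(theta_hat, item_pool, administered):
        candidates = [i for i in range(len(item_pool)) if i not in administered]
        return max(candidates,
                   key=lambda i: item_information_2pl(theta_hat, *item_pool[i]))

    pool = [(1.0, -1.0), (1.5, 0.0), (0.8, 0.5), (1.2, 1.0)]  # (a, b) pairs
    print(select_next_item(theta_hat=0.2, item_pool=pool, administered={1}))  # -> 3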
Stricker, Lawrence J.; Rock, Donald A. – ETS Research Report Series, 2008
This study assessed the invariance in the factor structure of the "Test of English as a Foreign Language"™ Internet-based test (TOEFL® iBT) across subgroups of test takers who differed in native language and exposure to the English language. The subgroups were defined by (a) Indo-European and Non-Indo-European language family, (b)…
Descriptors: Factor Structure, English (Second Language), Language Tests, Computer Assisted Testing
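The Stricker and Rock abstract concerns factorial invariance across test-taker subgroups. A standard way to express this kind of analysis (generic notation, not necessarily the report's) is a multiple-group common factor model in which, for subgroup g,

    x_g = \tau_g + \Lambda_g \xi_g + \delta_g, \qquad
    \Sigma_g = \Lambda_g \Phi_g \Lambda_g^{\top} + \Theta_g, \qquad g = 1, \dots, G,

and invariance is examined by testing constraints such as \Lambda_1 = \Lambda_2 = \cdots = \Lambda_G, i.e., equal factor loadings across the language-background subgroups.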
Handwerk, Phil – ETS Research Report Series, 2007
Online high schools are growing significantly in number, popularity, and function. However, little empirical data has been published about the effectiveness of these institutions. This research examined the frequency of group work and extended essay writing among online Advanced Placement Program® (AP®) students, and how these tasks may have…
Descriptors: Advanced Placement Programs, Advanced Placement, Computer Assisted Testing, Models
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
von Davier, Matthias – ETS Research Report Series, 2005
Probabilistic models with more than one latent variable are designed to report profiles of skills or cognitive attributes. Testing programs want to offer additional information beyond what a single test score can provide using these skill profiles. Many recent approaches to skill profile models are limited to dichotomous data and have made use of…
Descriptors: Models, Diagnostic Tests, Language Tests, Language Proficiency
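The von Davier abstract concerns models that report profiles of skills rather than a single score. In generic notation (illustrative only, not the report's specific formulation), a skill-profile model treats the latent variable as a vector of K binary attributes \alpha and mixes item response probabilities over the possible profiles:

    P(x_1, \dots, x_I) = \sum_{\alpha \in \{0,1\}^K} P(\alpha) \prod_{i=1}^{I} P(x_i \mid \alpha),

where P(x_i \mid \alpha) is the response probability for item i given profile \alpha; as the abstract notes, many earlier approaches restrict the x_i to dichotomous responses.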