Kim, Sooyeon; Lu, Ru – ETS Research Report Series, 2018
The purpose of this study was to evaluate the effectiveness of linking test scores by using test takers' background data to form pseudo-equivalent groups (PEGs). Using 4 operational test forms that each included 100 items and were taken by more than 30,000 test takers, we created 2 half-length research forms that had either 20…
Descriptors: Test Items, Item Banks, Difficulty Level, Comparative Analysis
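The pseudo-equivalent-groups idea can be made concrete with a small sketch: weight the group that took one form so its background-variable distribution matches the other group's before comparing scores. Everything below (a single three-level background variable, the score model, the sample sizes) is invented for illustration; the report's operational procedure is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one categorical background variable and a raw score.
bg_a = rng.choice(3, size=5000, p=[0.5, 0.3, 0.2])   # group taking form A
bg_b = rng.choice(3, size=5000, p=[0.2, 0.3, 0.5])   # group taking form B
score_b = rng.normal(50 + 2 * bg_b, 10)              # form-B scores, tied to background

# Weight each form-B taker so group B's weighted background distribution
# matches group A's, making the two groups "pseudo-equivalent" on that variable.
p_a = np.bincount(bg_a, minlength=3) / bg_a.size
p_b = np.bincount(bg_b, minlength=3) / bg_b.size
weights = (p_a / p_b)[bg_b]

peg_mean = np.average(score_b, weights=weights)
print(f"raw form-B mean {score_b.mean():.2f} vs PEG-weighted mean {peg_mean:.2f}")
```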
Kim, Sooyeon; Robin, Frederic – ETS Research Report Series, 2017
In this study, we examined the potential impact of item misfit on the reported scores of an admission test from the subpopulation invariance perspective. The target population of the test consisted of 3 major subgroups from different geographic regions. We used the logistic regression function to estimate item parameters of the operational items…
Descriptors: Scores, Test Items, Test Bias, International Assessment
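The subpopulation-invariance perspective the abstract mentions is often checked by calibrating an item separately in a subgroup and measuring how far the two item characteristic curves diverge. A minimal sketch under a 2PL logistic model, with made-up parameter values (the report's own misfit analysis is more involved):

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-4, 4, 401)
dx = theta[1] - theta[0]
dens = np.exp(-theta**2 / 2) / np.sqrt(2 * np.pi)   # N(0,1) ability weights

# Parameters for the same item, calibrated in the total group and
# separately in one subgroup (values invented for illustration).
p_total = p_2pl(theta, a=1.2, b=0.3)
p_sub = p_2pl(theta, a=0.8, b=0.6)

# Density-weighted area between the two item characteristic curves: one
# common index of how differently the item functions in the subgroup.
wabc = np.sum(np.abs(p_total - p_sub) * dens) * dx
print(f"weighted area between curves: {wabc:.3f}")
```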
Yoon, Su-Youn; Lee, Chong Min; Houghton, Patrick; Lopez, Melissa; Sakano, Jennifer; Loukina, Anastasia; Krovetz, Bob; Lu, Chi; Madani, Nitin – ETS Research Report Series, 2017
In this study, we developed assistive tools and resources to support TOEIC® Listening test item generation. There has recently been an increased need for a large pool of items for these tests. This need has, in turn, inspired efforts to increase the efficiency of item generation while maintaining the quality of the created items. We aimed to…
Descriptors: Natural Language Processing, Language Tests, Item Banks, Pilot Projects
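The report does not spell out the tools' internals, so the following is only a generic illustration of one kind of assistive check such item-generation pipelines often include: flagging near-duplicate draft items by token-overlap (Jaccard) similarity so redundant items don't enter the pool. All strings and the threshold are hypothetical.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two draft item stems."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

drafts = [
    "Where is the meeting scheduled to take place?",
    "Where will the meeting take place?",
    "What time does the store open?",
]
for i in range(len(drafts)):
    for j in range(i + 1, len(drafts)):
        s = jaccard(drafts[i], drafts[j])
        if s > 0.5:   # arbitrary threshold for this toy example
            print(f"possible duplicates ({s:.2f}): {drafts[i]!r} / {drafts[j]!r}")
```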
Qian, Jiahe; Jiang, Yanming; von Davier, Alina A. – ETS Research Report Series, 2013
Several factors could cause variability in item response theory (IRT) linking and equating procedures, such as the variability across examinee samples and/or test items, seasonality, regional differences, native language diversity, gender, and other demographic variables. Hence, the following question arises: Is it possible to select optimal…
Descriptors: Item Response Theory, Test Items, Sampling, True Scores
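For readers unfamiliar with IRT linking, a minimal mean-sigma sketch shows the kind of transformation whose stability the study's sampling question affects: anchor items calibrated on two forms determine a linear rescaling of the new form's parameters. The anchor values here are invented, and the study's procedures go well beyond this.

```python
import numpy as np

# Anchor-item difficulty estimates from two separate calibrations (invented).
b_base = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])   # base-form scale
b_new = np.array([-1.0, -0.1, 0.4, 1.2, 1.9])    # new-form scale

A = b_base.std(ddof=1) / b_new.std(ddof=1)       # slope of the linear linking
B = b_base.mean() - A * b_new.mean()             # intercept

b_linked = A * b_new + B                         # new-form b's on the base scale
print(f"A = {A:.3f}, B = {B:.3f}, linked b's: {np.round(b_linked, 2)}")
# Discriminations would transform as a_linked = a_new / A.
```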
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua – ETS Research Report Series, 2006
Chang and Zhang (2002, 2003) proposed several baseline criteria for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria were obtained from theoretical derivations that assumed uniformly randomized item selection. The current study investigated potential damage caused…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Computer Security
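The uniformly randomized baseline the abstract cites has a clean consequence: if each examinee receives k items drawn at random from a pool of N, the expected number of items two examinees share is k²/N. A quick Monte Carlo check of that baseline, with made-up pool and test lengths:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, reps = 300, 30, 20_000                 # pool size, test length, trials

overlap = np.empty(reps)
for r in range(reps):
    t1 = rng.choice(N, size=k, replace=False)
    t2 = rng.choice(N, size=k, replace=False)
    overlap[r] = np.intersect1d(t1, t2).size

print(f"simulated mean overlap {overlap.mean():.2f} vs k^2/N = {k**2 / N:.2f}")
```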
Zhang, Jinming; Chang, Hua-Hua – ETS Research Report Series, 2005
This paper compares the use of multiple pools versus a single pool with respect to test security against large-scale item sharing among some examinees in a computer-based test, under the assumption that a randomized item selection method is used. It characterizes the conditions under which employing multiple pools is better than using a single…
Descriptors: Comparative Analysis, Test Items, Item Banks, Computer Assisted Testing
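A toy simulation illustrates the single-pool-versus-multiple-pool tradeoff under randomized selection: splitting one pool of N items into m pools leaves the expected overlap between two random examinees at k²/N, but concentrates it, since pairs that happen to draw the same small pool share many more items. The paper's actual analysis targets organized, large-scale item sharing, a different threat model; sizes below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, k, m, reps = 300, 30, 3, 20_000           # total items, test length, pool count

def overlap_draws(pool_size, same_pool_prob):
    """Simulated overlap counts for random pairs of examinees."""
    hits = np.zeros(reps)
    for r in range(reps):
        if rng.random() < same_pool_prob:    # the pair drew the same pool
            t1 = rng.choice(pool_size, size=k, replace=False)
            t2 = rng.choice(pool_size, size=k, replace=False)
            hits[r] = np.intersect1d(t1, t2).size
    return hits                              # pairs in different pools share 0

single = overlap_draws(N, 1.0)               # one pool of all N items
multi = overlap_draws(N // m, 1.0 / m)       # m pools of N/m items each

print(f"single pool: mean {single.mean():.2f}, P(overlap > 10) = {(single > 10).mean():.4f}")
print(f"{m} pools:    mean {multi.mean():.2f}, P(overlap > 10) = {(multi > 10).mean():.4f}")
```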
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – ETS Research Report Series, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Test Items, Computer Assisted Testing, Computation, Adaptive Testing
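Of the sources of variation the abstract lists, pure estimation error is the easiest to see in simulation: recalibrating the same Rasch item in repeated examinee samples produces a spread of difficulty estimates even with no context effects at all. The true difficulty, sample size, and estimator below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
true_b, n, reps = 0.5, 500, 1000             # difficulty, sample size, replications

b_hats = np.empty(reps)
for r in range(reps):
    theta = rng.normal(0.0, 1.0, size=n)             # examinee abilities
    p = 1.0 / (1.0 + np.exp(-(theta - true_b)))      # Rasch response probabilities
    x = rng.random(n) < p                            # simulated 0/1 responses
    pc = x.mean()
    # Crude logit-of-p-value estimate; biased toward 0 because it ignores the
    # ability distribution, but its spread still shows pure sampling error.
    b_hats[r] = -np.log(pc / (1 - pc))

print(f"estimates: mean {b_hats.mean():.2f}, SD {b_hats.std():.2f} (true b = {true_b})")
```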