Showing 271 to 285 of 1,057 results
Peer reviewed
PDF on ERIC (download full text)
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, by changing the amount and order of the content presented, the test-taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
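Note: the abstract points out that DOMC varies the amount and order of content shown to each test taker. In this format, options appear one at a time and the test taker accepts or rejects each; the sketch below illustrates one plausible administration rule (the early-termination and scoring details are assumptions, not Foster and Miller's operational algorithm).

```python
import random

def administer_domc(options, keyed_index, respond):
    """Present options one at a time in random order.

    respond(option) should return True if the test taker marks the
    option as correct. The early-termination and scoring rules below
    are assumptions for illustration, not the operational algorithm.
    """
    order = list(range(len(options)))
    random.shuffle(order)                 # order varies by test taker
    for idx in order:
        accepted = respond(options[idx])
        if idx == keyed_index:
            return accepted               # assumed: item ends at the keyed option
        if accepted:
            return False                  # assumed: accepting a distractor ends the item
    return False
```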
Peer reviewed
PDF on ERIC (download full text)
Lopez, Alexis A.; Guzman-Orth, Danielle; Zapata-Rivera, Diego; Forsyth, Carolyn M.; Luce, Christine – ETS Research Report Series, 2021
Substantial progress has been made toward applying technology-enhanced conversation-based assessments (CBAs) to measure the English-language proficiency of English learners (ELs). CBAs are conversation-based systems that use conversations among computer-animated agents and a test taker. We expanded the design and capability of prior…
Descriptors: Accuracy, English Language Learners, Language Proficiency, Language Tests
Peer reviewed
Direct link
Zhu, Xuelian; Aryadoust, Vahid – Computer Assisted Language Learning, 2022
A fundamental requirement of language assessments, yet one that is under-researched in computerized assessments, is impartiality (fairness): equal treatment of test takers regardless of background. The present study aimed to evaluate fairness in the Pearson Test of English (PTE) Academic Reading test, which is a computerized reading assessment, by…
Descriptors: Computer Assisted Testing, Language Tests, Native Language, Culture Fair Tests
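Note: the abstract does not say which fairness statistic the authors used, but analyses of this kind often rest on differential item functioning (DIF) measures such as Mantel-Haenszel. A minimal sketch follows; the data layout and group labels are assumptions.

```python
import math
from collections import defaultdict

def mantel_haenszel_dif(records):
    """Mantel-Haenszel DIF for one item.

    records: iterable of (total_score, group, correct) tuples, where
    group is 'reference' or 'focal' and correct is 0 or 1 (assumed layout).
    Returns the MH common odds ratio and the ETS delta value
    (MH D-DIF = -2.35 * ln(odds ratio)).
    """
    # One 2x2 table per total-score stratum: [group][correct] counts
    strata = defaultdict(lambda: [[0, 0], [0, 0]])
    for score, group, correct in records:
        g = 0 if group == "reference" else 1
        strata[score][g][int(correct)] += 1

    num = den = 0.0
    for (ref_wrong, ref_right), (foc_wrong, foc_right) in strata.values():
        n = ref_wrong + ref_right + foc_wrong + foc_right
        if n == 0:
            continue
        num += ref_right * foc_wrong / n
        den += foc_right * ref_wrong / n

    odds_ratio = num / den            # assumes den > 0 for this sketch
    return odds_ratio, -2.35 * math.log(odds_ratio)
```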
O'Malley, Fran; Norton, Scott – American Institutes for Research, 2022
This paper provides the National Center for Education Statistics (NCES), National Assessment Governing Board (NAGB), and the National Assessment of Educational Progress (NAEP) community with information that may help maintain the validity and utility of the NAEP assessments for civics and U.S. history as revisions are planned to the NAEP…
Descriptors: National Competency Tests, United States History, Test Validity, Governing Boards
Schulz, Wolfram; Fraillon, Julian; Losito, Bruno; Agrusti, Gabriella; Ainley, John; Damiani, Valeria; Friedman, Tim – International Association for the Evaluation of Educational Achievement, 2022
The purpose of the International Civic and Citizenship Education Study (ICCS) is to investigate the changing ways in which young people are prepared to undertake their roles as citizens across a wide range of countries. This assessment framework provides a conceptual underpinning for the international instrumentation for ICCS 2022. It needs to…
Descriptors: Citizenship Education, Context Effect, Guidelines, Course Content
Peer reviewed
PDF on ERIC (download full text)
Lopez, Alexis A.; Tolentino, Florencia – ETS Research Report Series, 2020
In this study we investigated how English learners (ELs) interacted with "®" summative English language arts (ELA) and mathematics items, the embedded online tools, and accessibility features. We focused on how EL students navigated the assessment items; how they selected or constructed their responses; how they interacted with the…
Descriptors: English Language Learners, Student Evaluation, Language Arts, Summative Evaluation
Peer reviewed
Direct link
Yasuno, Fumiko; Nishimura, Keiichi; Negami, Seiya; Namikawa, Yukihiko – International Journal for Technology in Mathematics Education, 2019
Our study concerns the development of mathematics items for computer-based testing (CBT) on tablet PCs. These are subject-based items that use interactive dynamic objects. The purpose of this study is to obtain suggestions for further tasks, drawing on field-test results for the developed items. First, we clarified the role of the interactive dynamic…
Descriptors: Mathematics Instruction, Mathematics Tests, Test Items, Computer Assisted Testing
Peer reviewed
PDF on ERIC (download full text)
Toker, Deniz – TESL-EJ, 2019
The central purpose of this paper is to examine validity problems arising from the multiple-choice items and technical passages in the Test of English as a Foreign Language Internet-based Test (TOEFL iBT) reading section, primarily concentrating on construct-irrelevant variance (Messick, 1989). My personal TOEFL iBT experience, along with my…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
Peer reviewed
Direct link
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
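Note: the very short response times Wise studies are typically flagged with per-item time thresholds, and the proportion of unflagged responses is reported as response-time effort (RTE). A minimal sketch follows; the threshold values and function names are assumptions.

```python
def flag_rapid_guesses(response_times, thresholds):
    """Mark responses faster than each item's threshold (seconds).

    Thresholds here are assumptions; operationally they are set per
    item, e.g., from response-time distributions or item length.
    """
    return [rt < thr for rt, thr in zip(response_times, thresholds)]

def response_time_effort(flags):
    """Proportion of unflagged (solution-behavior) responses."""
    return 1 - sum(flags) / len(flags)

# Example with assumed 3-second thresholds for three items:
# flags = flag_rapid_guesses([1.2, 14.8, 22.5], [3.0, 3.0, 3.0])
# response_time_effort(flags)  -> 0.666...
```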
Wang, Keyin – ProQuest LLC, 2017
The comparison of item-level computerized adaptive testing (CAT) and multistage adaptive testing (MST) has been researched extensively (e.g., Kim & Plake, 1993; Luecht et al., 1996; Patsula, 1999; Jodoin, 2003; Hambleton & Xing, 2006; Keng, 2008; Zheng, 2012). Various CAT and MST designs have been investigated and compared under the same…
Descriptors: Comparative Analysis, Computer Assisted Testing, Adaptive Testing, Test Items
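Note: the item-level CAT designs compared in work like this usually select each next item to maximize Fisher information at the provisional ability estimate. A minimal 2PL sketch follows; the item-bank layout is an assumption and the ability-update step is omitted.

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta_hat, item_bank, administered):
    """Pick the unadministered item with maximum information at the
    provisional ability estimate. item_bank: dict of item id -> (a, b)."""
    candidates = [i for i in item_bank if i not in administered]
    return max(candidates,
               key=lambda i: item_information(theta_hat, *item_bank[i]))
```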
Peer reviewed
Direct link
Wang, Chao; Lu, Hong – Educational Technology & Society, 2018
This study focused on the effect of examinees' ability levels on the relationship between Reflective-Impulsive (RI) cognitive style and item response time in computerized adaptive testing (CAT). A total of 56 students majoring in Educational Technology at Shandong Normal University participated in this study, and their RI cognitive styles were…
Descriptors: Item Response Theory, Computer Assisted Testing, Cognitive Style, Correlation
Peer reviewed
PDF on ERIC (download full text)
Çekiç, Ahmet; Bakla, Arif – International Online Journal of Education and Teaching, 2021
The Internet and mobile app stores offer a huge number of digital tools for almost any task, and those intended for digital formative assessment (DFA) have burgeoned in the last decade. These tools vary in terms of their functionality, pedagogical quality, cost, operating systems and so forth. Teachers and learners…
Descriptors: Formative Evaluation, Futures (of Society), Computer Assisted Testing, Guidance
Peer reviewed
Direct link
Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification
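Note: classification-oriented CAT of the kind described here commonly stops and classifies once a confidence interval around the provisional ability estimate clears the cut score. A minimal sketch follows; the 95% interval and function names are assumptions.

```python
def classify(theta_hat, se, cut_score, z=1.96):
    """Confidence-interval classification rule for a CAT.

    Returns 'pass' or 'fail' once the interval around the ability
    estimate clears the cut score, or None to keep administering items.
    The 95% interval (z = 1.96) is an assumed setting.
    """
    lower, upper = theta_hat - z * se, theta_hat + z * se
    if lower > cut_score:
        return "pass"
    if upper < cut_score:
        return "fail"
    return None  # interval still brackets the cut score
```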
Peer reviewed
PDF on ERIC (download full text)
Istiyono, Edi; Dwandaru, Wipsar Sunu Brams; Lede, Yulita Adelfin; Rahayu, Farida; Nadapdap, Amipa – International Journal of Instruction, 2019
The objective of this study was to develop a physics critical thinking skills test using a computerized adaptive test (CAT) based on item response theory (IRT). This was a development study following the 4-D model (define, design, develop, and disseminate). The content validity of the items was established using Aiken's V. The test trial involved 252 students…
Descriptors: Critical Thinking, Thinking Skills, Cognitive Tests, Physics
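Note: Aiken's V, used here to document content validity, aggregates expert ratings of each item as V = Σ(r − lo) / [n(hi − lo)] for ratings on a scale from lo to hi. A minimal sketch follows; the variable names are mine.

```python
def aikens_v(ratings, lo, hi):
    """Aiken's V for one item.

    ratings: expert ratings on a scale from lo to hi.
    V = sum(r - lo) / (n * (hi - lo)), i.e., S / [n(c - 1)]
    for a c-point rating scale.
    """
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (hi - lo))

# Example: five experts rate an item 4, 5, 4, 3, 5 on a 1-5 scale
# aikens_v([4, 5, 4, 3, 5], lo=1, hi=5)  -> 0.8
```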
Peer reviewed
Direct link
Russell, Michael – Journal of Applied Testing Technology, 2016
Interest in and use of technology-enhanced items has increased over the past decade. Given the additional time required to administer many technology-enhanced items and the increased expense required to develop them, it is important for testing programs to consider the utility of technology-enhanced items. The Technology-Enhanced Item Utility…
Descriptors: Test Items, Computer Assisted Testing, Models, Fidelity