Showing 1 to 15 of 68 results
Peer reviewed
Download full text (PDF on ERIC)
Erdem-Kara, Basak; Dogan, Nuri – International Journal of Assessment Tools in Education, 2022
Recently, adaptive test approaches have become a viable alternative to traditional fixed-item tests. The main advantage of adaptive tests is that they reach the desired measurement precision with fewer items. However, fewer items mean that each item has a more significant effect on ability estimation, and therefore those tests are open to more…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Test Construction
Peer reviewed
Download full text (PDF on ERIC)
Wolkowitz, Amanda A.; Foley, Brett; Zurn, Jared – Practical Assessment, Research & Evaluation, 2023
The purpose of this study is to introduce a method for converting scored 4-option multiple-choice (MC) items into scored 3-option MC items without re-pretesting the 3-option MC items. This study describes a six-step process for achieving this goal. Data from a professional credentialing exam were used in this study, and the method was applied to 24…
Descriptors: Multiple Choice Tests, Test Items, Accuracy, Test Format
Peer reviewed
Download full text (PDF on ERIC)
Jonathan Trace – Language Teaching Research Quarterly, 2023
The role of context in cloze tests has long been seen as both a benefit and a complication in their usefulness as a measure of second language comprehension (Brown, 2013). Passage cohesion, in particular, would seem to have a relevant and important effect on the degree to which cloze items function and the interpretability of performances…
Descriptors: Language Tests, Cloze Procedure, Connected Discourse, Test Items
Peer reviewed
Direct link
Raymond, Mark R.; Stevens, Craig; Bucak, S. Deniz – Advances in Health Sciences Education, 2019
Research suggests that the three-option format is optimal for multiple choice questions (MCQs). This conclusion is supported by numerous studies showing that most distractors (i.e., incorrect answers) are selected by so few examinees that they are essentially nonfunctional. However, nearly all studies have defined a distractor as nonfunctional if…
Descriptors: Multiple Choice Tests, Credentials, Test Format, Test Items
National Academies Press, 2022
The National Assessment of Educational Progress (NAEP) -- often called "The Nation's Report Card" -- is the largest nationally representative and continuing assessment of what students in public and private schools in the United States know and can do in various subjects, and it has provided policy makers and the public with invaluable…
Descriptors: Costs, Futures (of Society), National Competency Tests, Educational Trends
Peer reviewed
Direct link
Arce-Ferrer, Alvaro J.; Bulut, Okan – Journal of Experimental Education, 2019
This study investigated the performance of four widely used data-collection designs in detecting test-mode effects (i.e., computer-based versus paper-based testing). The experimental conditions included four data-collection designs, two test-administration modes, and the availability of an anchor assessment. The test-level and item-level results…
Descriptors: Data Collection, Test Construction, Test Format, Computer Assisted Testing
Peer reviewed
Direct link
Kim, Ahyoung Alicia; Tywoniw, Rurik L.; Chapman, Mark – Language Assessment Quarterly, 2022
Technology-enhanced items (TEIs) are innovative, computer-delivered test items that allow test takers to better interact with the test environment compared to traditional multiple-choice items (MCIs). The interactive nature of TEIs offers improved construct coverage compared with MCIs, but little research exists regarding students' performance on…
Descriptors: Language Tests, Test Items, Computer Assisted Testing, English (Second Language)
Peer reviewed
Download full text (PDF on ERIC)
Liao, Linyu – English Language Teaching, 2020
As a high-stakes standardized test, IELTS is expected to have comparable forms of test papers so that test takers from different test administrations on different dates receive comparable test scores. Therefore, this study examined the text difficulty and task characteristics of four parallel academic IELTS reading tests to reveal to what extent…
Descriptors: Second Language Learning, English (Second Language), Language Tests, High Stakes Tests
Peer reviewed
Direct link
Gu, Lin; Ling, Guangming; Liu, Ou Lydia; Yang, Zhitong; Li, Guirong; Kardanova, Elena; Loyalka, Prashant – Assessment & Evaluation in Higher Education, 2021
We examine the effects of computer-based versus paper-based assessment of critical thinking skills, adapted from English (in the U.S.) to Chinese. Using data collected based on a random assignment between the two modes in multiple Chinese colleges, we investigate mode effects from multiple perspectives: mean scores, measurement precision, item…
Descriptors: Critical Thinking, Tests, Test Format, Computer Assisted Testing
Peer reviewed
Direct link
Yasuno, Fumiko; Nishimura, Keiichi; Negami, Seiya; Namikawa, Yukihiko – International Journal for Technology in Mathematics Education, 2019
Our study concerns developing mathematics items for Computer-Based Testing (CBT) using tablet PCs. These items are subject-based items using interactive dynamic objects. The purpose of this study is to obtain suggestions for further tasks, drawing on field-test results for the developed items. First, we clarified the role of the interactive dynamic…
Descriptors: Mathematics Instruction, Mathematics Tests, Test Items, Computer Assisted Testing
Peer reviewed
Download full text (PDF on ERIC)
Karagöl, Efecan – Journal of Language and Linguistic Studies, 2020
Turkish and Foreign Languages Research and Application Center (TÖMER) is one of the important institutions for learning Turkish as a foreign language. In these institutions, proficiency tests are administered at the end of each level. However, test applications vary from one TÖMER center to another, as there is no shared program in teaching Turkish as a…
Descriptors: Language Tests, Turkish, Language Proficiency, Second Language Learning
Peer reviewed
Direct link
Constantinou, Filio – Cambridge Journal of Education, 2020
Written examinations represent one of the most common assessment tools in education. Though typically perceived as measurement instruments, written examinations are primarily texts that perform a communicative function. To complement existing research, this study viewed written examinations as a distinct form of communication (i.e. 'register').…
Descriptors: Sociolinguistics, Linguistic Theory, Test Items, Item Analysis
Peer reviewed
Direct link
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André – Applied Measurement in Education, 2016
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Descriptors: Psychometrics, Multiple Choice Tests, Test Items, Item Analysis
Peer reviewed
Download full text (PDF on ERIC)
Shilo, Gila – Educational Research Quarterly, 2015
The purpose of the study was to examine the quality of open test questions directed to high school and college students. One thousand five hundred examination questions from various fields of study were examined using criteria based on writing centers' directions and guidelines. The 273 questions that did not fulfill the criteria were analyzed…
Descriptors: Questioning Techniques, Questionnaires, Test Construction, High School Students
Peer reviewed
Direct link
Keller, Lisa A.; Keller, Robert R. – Applied Measurement in Education, 2015
Equating test forms is an essential activity in standardized testing, one whose importance has increased under the accountability systems established through the mandate of Adequate Yearly Progress. It is through equating that scores from different test forms become comparable, which allows for the tracking of changes in the performance of students from…
Descriptors: Item Response Theory, Rating Scales, Standardized Tests, Scoring Rubrics