Showing 106 to 120 of 362 results
Susanti, Yuni; Tokunaga, Takenobu; Nishikawa, Hitoshi; Obari, Hiroyuki – Research and Practice in Technology Enhanced Learning, 2017
The present study investigates the best factor for controlling the item difficulty of multiple-choice English vocabulary questions generated by an automatic question generation system. Three factors are considered for controlling item difficulty: (1) reading passage difficulty, (2) semantic similarity between the correct answer and distractors,…
Descriptors: Test Items, Difficulty Level, Computer Assisted Testing, Vocabulary Development
Peer reviewed
PDF on ERIC Download full text
Teneqexhi, Romeo; Qirko, Margarita; Sharko, Genci; Vrapi, Fatmir; Kuneshka, Loreta – International Association for Development of the Information Society, 2017
Exam assessment is one of the most tedious tasks for university teachers all over the world. Multiple-choice tests make exam assessment a little easier, but the teacher cannot prepare more than 3-4 variants; in this case, the possibility of students cheating from one another becomes a risk to the "objective assessment outcome." On…
Descriptors: Testing, Computer Assisted Testing, Test Items, Test Construction
Peer reviewed
PDF on ERIC Download full text
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, by changing the amount and order of the content presented, the test taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
Peer reviewed
PDF on ERIC Download full text
Caspari-Sadeghi, Sima; Forster-Heinlein, Brigitte; Maegdefrau, Jutta; Bachl, Lena – International Journal for the Scholarship of Teaching and Learning, 2021
This action research study presents the findings of using a formative assessment strategy in an online mathematics course during the worldwide outbreak of COVID-19 at the University of Passau, Germany. The main goals of this study were: (1) to enhance students' self-regulated learning by shifting the direction of assessment from instructors to the…
Descriptors: Foreign Countries, Online Courses, COVID-19, Pandemics
Peer reviewed
PDF on ERIC Download full text
Toker, Deniz – TESL-EJ, 2019
The central purpose of this paper is to examine validity problems arising from the multiple-choice items and technical passages in the Test of English as a Foreign Language Internet-based Test (TOEFL iBT) reading section, primarily concentrating on construct-irrelevant variance (Messick, 1989). My personal TOEFL iBT experience, along with my…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
Peer reviewed
PDF on ERIC Download full text
El Rassi, Mary Ann Barbour – International Association for Development of the Information Society, 2019
It has long been debated whether the Open-Book-Open-Web (OBOW) exam is as useful and efficient as the traditional closed-book exam. Some scholars and practitioners have doubted the efficiency of the OBOW exam and raised the possibility of cheating, as it is not directly monitored. This paper investigates the effectiveness of OBOW exams by comparing them with…
Descriptors: Developing Nations, Test Format, Tests, Cheating
Peer reviewed
Direct link
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
Peer reviewed
Direct link
Kuo, Bor-Chen; Liao, Chen-Huei; Pai, Kai-Chih; Shih, Shu-Chuan; Li, Cheng-Hsuan; Mok, Magdalena Mo Ching – Educational Psychology, 2020
The current study explores students' collaboration and problem solving (CPS) abilities using a human-to-agent (H-A) computer-based collaborative problem solving assessment. Five CPS assessment units with 76 conversation-based items were constructed using the PISA 2015 CPS framework. In the experiment, 53,855 ninth and tenth graders in Taiwan were…
Descriptors: Computer Assisted Testing, Cooperative Learning, Problem Solving, Item Response Theory
Peer reviewed
PDF on ERIC Download full text
Khoshsima, Hooshang; Hashemi Toroujeni, Seyyed Morteza; Thompson, Nathan; Reza Ebrahimi, Mohammad – Teaching English with Technology, 2019
The current study was conducted to investigate whether test scores of Iranian English as a Foreign Language (EFL) learners were equivalent across CBT and PBT modes, with 58 intermediate learners studying at a private language academy located in Behshahr city in northern Iran. Moreover, test takers' computer familiarity, attitudes, aversion, and…
Descriptors: Computer Assisted Testing, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
PDF on ERIC Download full text
Singh, Upasana Gitanjali; de Villiers, Mary Ruth – International Review of Research in Open and Distributed Learning, 2017
e-Assessment, in the form of tools and systems that deliver and administer multiple choice questions (MCQs), is used increasingly, raising the need for evaluation and validation of such systems. This research uses literature and a series of six empirical action research studies to develop an evaluation framework of categories and criteria called…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Test Selection, Action Research
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Peer reviewed
Direct link
Ladyshewsky, Richard K. – Assessment & Evaluation in Higher Education, 2015
This research explores differences in multiple choice test (MCT) scores in a cohort of post-graduate students enrolled in a management and leadership course. A total of 250 students completed the MCT in either a supervised in-class paper and pencil test or an unsupervised online test. The only statistically significant difference between the nine…
Descriptors: Graduate Students, Multiple Choice Tests, Cheating, Scores
Peer reviewed
PDF on ERIC Download full text
Abidin, Aang Zainul; Istiyono, Edi; Fadilah, Nunung; Dwandaru, Wipsar Sunu Brams – International Journal of Evaluation and Research in Education, 2019
Classical assessments that are not comprehensive and do not distinguish students' initial abilities yield measurement results far from their actual abilities. This study was conducted to produce a computerized adaptive test for physics critical thinking skills (CAT-PhysCriTS) that met the feasibility criteria. The test was presented for the physics…
Descriptors: Foreign Countries, High School Students, Grade 11, Physics
Peer reviewed
PDF on ERIC Download full text
Daffin, Lee William, Jr.; Jones, Ashley A. – Online Learning, 2018
As online education becomes a more popular and permanent option for obtaining an education after high school, it also raises questions as to the academic rigor of such classes and the academic integrity of the students taking the classes. The purpose of the current study is to explore the integrity issue and to investigate student performance on…
Descriptors: College Students, Online Courses, Psychology, Computer Assisted Testing
Peer reviewed
PDF on ERIC Download full text
Nygren, Thomas; Guath, Mona – International Association for Development of the Information Society, 2018
In this study we investigate the ability of 532 teenagers to determine the credibility of digital news. Using an online test, we assess to what extent teenagers are able to determine the credibility of different sources, evaluate credible and biased uses of evidence, and corroborate information. Many respondents fail to identify the credibility of…
Descriptors: Credibility, Information Sources, Information Literacy, News Reporting