Showing 1 to 15 of 52 results
Peer reviewed | PDF on ERIC
Sunbul, Onder; Yormaz, Seha – International Journal of Evaluation and Research in Education, 2018
In this study, Type I error and power rates of the omega (ω) and GBT (generalized binomial test) indices were investigated for several nominal alpha levels and for 40- and 80-item test lengths, with a 10,000-examinee sample size, under several test-level restrictions. As a result, Type I error rates of both indices were found to be below the acceptable…
Descriptors: Difficulty Level, Cheating, Duplication, Test Length
Peer reviewed | PDF on ERIC
Sunbul, Onder; Yormaz, Seha – Eurasian Journal of Educational Research, 2018
Purpose: Several studies in the literature investigate the performance of ω under various conditions. However, no study on the effects of item difficulty, item discrimination, and ability restrictions on the performance of ω could be found. The current study aims to investigate the performance of ω for the conditions given below…
Descriptors: Test Items, Difficulty Level, Ability, Cheating
Peer reviewed | PDF on ERIC
Kalkan, Ömür Kaya; Kara, Yusuf; Kelecioglu, Hülya – International Journal of Assessment Tools in Education, 2018
Missing data is a common problem in datasets obtained by administering educational and psychological tests. It is widely known that missing observations can lead to serious problems such as biased parameter estimates and inflated standard errors. Most missing data imputation methods are focused on…
Descriptors: Item Response Theory, Statistical Analysis, Data, Test Items
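The abstract above names the core concern: naive handling of missing responses biases estimates. As a minimal illustration (our own sketch, not code from the article), here is simple column-mean imputation on a small response matrix, where `None` marks an omitted item; more principled methods (e.g., model-based or multiple imputation) address the bias this simple approach can introduce.

```python
# Illustrative sketch of mean imputation for missing item responses.
# None marks an omitted response; each missing cell is replaced with
# the mean of the observed responses to that item (column).
def mean_impute(matrix):
    """Replace missing entries (None) with the column (item) mean."""
    n_items = len(matrix[0])
    imputed = [row[:] for row in matrix]
    for j in range(n_items):
        observed = [row[j] for row in matrix if row[j] is not None]
        col_mean = sum(observed) / len(observed)
        for row in imputed:
            if row[j] is None:
                row[j] = col_mean
    return imputed

responses = [
    [1, 0, 1],
    [1, None, 0],
    [0, 1, None],
]
print(mean_impute(responses))  # missing cells become 0.5 here
```

Mean imputation shrinks item variance and can distort item-total correlations, which is exactly the kind of bias the study's abstract alludes to.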
Peer reviewed | Direct link
Young, Nicholas T.; Heckler, Andrew F. – Physical Review Physics Education Research, 2018
In the context of a generic harmonic oscillator, we investigated students' accuracy in determining the period, frequency, and angular frequency from mathematical and graphical representations. In a series of studies including interviews, free response tests, and multiple-choice tests developed in an iterative process, we assessed students in both…
Descriptors: Interviews, Accuracy, Multiple Choice Tests, Algebra
Peer reviewed | Direct link
Machida, Keitaro; Chin, Michelle; Johnson, Katherine A. – Active Learning in Higher Education, 2018
To optimize learning in lectures, students need to maintain a sustained level of attention to the lecture material. Previous research has suggested, however, that student attention declines over the course of the lecture. One strategy suggested to improve sustained attention of students during the lecture is to encourage note-taking by students.…
Descriptors: Notetaking, Attention, Lecture Method, Learner Engagement
Pawade, Yogesh R.; Diwase, Dipti S. – Journal of Educational Technology, 2016
Item analysis of Multiple Choice Questions (MCQs) is the process of collecting, summarizing and utilizing information from students' responses to evaluate the quality of test items. Difficulty Index (p-value), Discrimination Index (DI) and Distractor Efficiency (DE) are the parameters which help to evaluate the quality of MCQs used in an…
Descriptors: Test Items, Item Analysis, Multiple Choice Tests, Curriculum Development
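The three statistics this record names have standard textbook definitions, which can be sketched briefly (function names and sample numbers are ours, not the article's):

```python
# Classical item-analysis statistics for multiple-choice questions:
# difficulty index (p), discrimination index (DI), distractor efficiency (DE).

def difficulty_index(correct, total):
    """p-value: proportion of examinees answering the item correctly."""
    return correct / total

def discrimination_index(upper_correct, lower_correct, group_size):
    """DI: difference in proportion correct between the upper and lower
    scoring groups (commonly the top and bottom 27% of examinees)."""
    return (upper_correct - lower_correct) / group_size

def distractor_efficiency(distractor_counts, total, threshold=0.05):
    """DE: share of distractors that are 'functional', i.e. chosen by
    at least `threshold` (conventionally 5%) of examinees."""
    functional = sum(1 for c in distractor_counts if c / total >= threshold)
    return functional / len(distractor_counts)

# Hypothetical item: 100 examinees, 60 correct; upper/lower groups of 27.
print(difficulty_index(60, 100))                 # 0.6
print(discrimination_index(24, 10, 27))          # ~0.52
print(distractor_efficiency([20, 15, 5], 100))   # 1.0 (all functional)
```

A p-value near 0.6 and a DI above 0.4 would typically mark this hypothetical item as acceptable; a distractor drawing fewer than 5% of responses would lower DE and flag the option for revision.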
Peer reviewed | Direct link
Quaigrain, Kennedy; Arhin, Ato Kwamina – Cogent Education, 2017
Item analysis is essential in improving items which will be used again in later tests; it can also be used to eliminate misleading items in a test. The study focused on item and test quality and explored the relationship between difficulty index (p-value) and discrimination index (DI) with distractor efficiency (DE). The study was conducted among…
Descriptors: Item Analysis, Teacher Developed Materials, Test Reliability, Educational Assessment
Peer reviewed | PDF on ERIC
Rahman, Taslima; Mislevy, Robert J. – ETS Research Report Series, 2017
To demonstrate how methodologies for assessing reading comprehension can grow out of views of the construct suggested in the reading research literature, we constructed tasks and carried out psychometric analyses that were framed in accordance with 2 leading reading models. In estimating item difficulty and subsequently, examinee proficiency, an…
Descriptors: Reading Tests, Reading Comprehension, Psychometrics, Test Items
Peer reviewed | PDF on ERIC
Pachai, Matthew V.; DiBattista, David; Kim, Joseph A. – Canadian Journal for the Scholarship of Teaching and Learning, 2015
Multiple choice writing guidelines are decidedly split on the use of "none of the above" (NOTA), with some authors discouraging and others advocating its use. Moreover, empirical studies of NOTA have produced mixed results. Generally, these studies have utilized NOTA as either the correct response or a distractor and assessed its effect…
Descriptors: Multiple Choice Tests, Test Items, Introductory Courses, Psychology
Peer reviewed | Direct link
Andrich, David; Marais, Ida; Humphry, Stephen Mark – Educational and Psychological Measurement, 2016
Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The…
Descriptors: Guessing (Tests), Statistical Bias, Item Response Theory, Multiple Choice Tests
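The contrast driving this record, how guessing distorts Rasch difficulty estimates for multiple-choice items, can be illustrated by comparing the Rasch response function with a model that adds a guessing floor. The following is our own sketch (a standard 3PL-style lower asymptote, not the authors' bias-elimination method):

```python
import math

# Item response probability under the Rasch model versus a model with a
# guessing parameter c (3PL-style lower asymptote). Guessing inflates the
# success probability of low-ability examinees, which biases Rasch
# difficulty estimates if left unmodeled.
def rasch_p(theta, b):
    """Rasch: P(correct) for ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def guessing_p(theta, b, a=1.0, c=0.25):
    """With guessing: lower asymptote c (e.g. 1/4 for four options)."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# A low-ability examinee (theta = -2) on an average item (b = 0):
print(rasch_p(-2.0, 0.0))     # ~0.12 under Rasch
print(guessing_p(-2.0, 0.0))  # ~0.34 with a 0.25 guessing floor
```

Fitting a Rasch model to data generated with such a floor makes easy-looking responses from low-ability examinees pull difficulty estimates downward, the statistical bias the study removes before vertical scaling.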
Peer reviewed | Direct link
Krell, Moritz – Cogent Education, 2017
This study evaluates a 12-item instrument for subjective measurement of mental load (ML) and mental effort (ME) by analysing different sources of validity evidence. The findings of an expert judgement (N = 8) provide "evidence based on test content" that the formulation of the items corresponds to the meaning of ML and ME. An empirical…
Descriptors: Cognitive Processes, Test Validity, Secondary School Students, Multiple Choice Tests
Peer reviewed | PDF on ERIC
Sener, Nilay; Tas, Erol – Journal of Education and Learning, 2017
The purpose of this study is to prepare a multiple-choice achievement test with high reliability and validity for the "Let's Solve the Puzzle of Our Body" unit. For this purpose, a multiple choice achievement test consisting of 46 items was applied to 178 fifth grade students in total. As a result of the test and material analysis…
Descriptors: Achievement Tests, Grade 5, Science Instruction, Biology
Peer reviewed | Direct link
Goncher, Andrea M.; Jayalath, Dhammika; Boles, Wageeh – IEEE Transactions on Education, 2016
Concept inventory tests are one method to evaluate conceptual understanding and identify possible misconceptions. The multiple-choice question format, offering a choice between a correct selection and common misconceptions, can provide an assessment of students' conceptual understanding in various dimensions. Misconceptions of some engineering…
Descriptors: Case Studies, Concept Formation, Teaching Methods, Misconceptions
Peer reviewed | PDF on ERIC
Abdulwahid, Muntaha A.; Hamzah, Zaitul Azma Binti Zainon; Hajimaming, Pabiyah; Alkhawaja, Hussein W. – International Journal of Education and Literacy Studies, 2017
Legal translation of contract agreements is a challenge to translators, as it involves combining literary translation with technical terminological precision. In translating legal contract agreements, a legal translator must utilize lexical or syntactic precision and, more importantly, pragmatic awareness of the context. This will…
Descriptors: Translation, Phrase Structure, English (Second Language), Second Language Learning
Peer reviewed | PDF on ERIC
Koçdar, Serpil; Karadag, Nejdet; Sahin, Murat Dogan – Turkish Online Journal of Educational Technology - TOJET, 2016
This is a descriptive study that intends to determine whether the difficulty and discrimination indices of multiple-choice questions show differences according to the cognitive levels of Bloom's Taxonomy. The questions are used in the exams of courses in a business administration bachelor's degree program offered through open and distance…
Descriptors: Multiple Choice Tests, Difficulty Level, Distance Education, Open Education