Showing 2,416 to 2,430 of 9,530 results
Peer reviewed
Direct link
Mulligan, Neil W.; Smith, S. Adam; Spataro, Pietro – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2016
Stimuli co-occurring with targets in a detection task are better remembered than stimuli co-occurring with distractors--the attentional boost effect (ABE). The ABE is of interest because it is an exception to the usual finding that divided attention during encoding impairs memory. The effect has been demonstrated in tests of item memory but it is…
Descriptors: Memory, Attention, Recognition (Psychology), Priming
Peer reviewed
PDF on ERIC Download full text
Foley, Brett P. – Practical Assessment, Research & Evaluation, 2016
There is always a chance that examinees will answer multiple choice (MC) items correctly by guessing. Design choices in some modern exams have created situations where guessing at random through the full exam--rather than only for a subset of items where the examinee does not know the answer--can be an effective strategy to pass the exam. This…
Descriptors: Guessing (Tests), Multiple Choice Tests, Case Studies, Test Construction
Peer reviewed
PDF on ERIC Download full text
Park, Mihwa; Johnson, Joseph A. – International Journal of Environmental and Science Education, 2016
While significant research has been conducted on students' conceptions of energy, alternative conceptions of energy have not been actively explored in the area of environmental science. The purpose of this study is to examine students' alternative conceptions in the environmental science discipline through the analysis of responses of first year…
Descriptors: Environmental Education, Multiple Choice Tests, Test Items, Energy
Peer reviewed
PDF on ERIC Download full text
Huda, Nizlel; Subanji; Nusantar, Toto; Susiswo; Sutawidjaja, Akbar; Rahardjo, Swasono – Educational Research and Reviews, 2016
This study aimed to identify students' metacognitive failures in the Mathematics Education Program of FKIP at Jambi University, investigated within a mathematical framework of assimilation and accommodation. Of the 35 students, five did not answer the question, three completed the questions correctly, and 27 tried to solve…
Descriptors: Metacognition, Mathematics Education, Problem Solving, Qualitative Research
Peer reviewed
Direct link
Ryan, Ève; Brunfaut, Tineke – Language Assessment Quarterly, 2016
It is not unusual for tests in less-commonly taught languages (LCTLs) to be developed by an experienced item writer with no proficiency in the language being tested, in collaboration with a language informant who is a speaker of the target language, but lacks language assessment expertise. How this approach to item writing works in practice, and…
Descriptors: Language Tests, Uncommonly Taught Languages, Test Construction, Test Items
Peer reviewed
Direct link
Suh, Youngsuk – Journal of Educational Measurement, 2016
This study adapted an effect size measure used for studying differential item functioning (DIF) in unidimensional tests and extended the measure to multidimensional tests. Two effect size measures were considered in a multidimensional item response theory model: signed weighted P-difference and unsigned weighted P-difference. The performance of…
Descriptors: Effect Size, Goodness of Fit, Statistical Analysis, Statistical Significance
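The signed and unsigned weighted P-differences mentioned in the abstract above can be sketched concretely. This is an illustrative reconstruction only, not the article's implementation: it assumes a 2PL item response model, a quadrature grid with standard-normal weights, and the common convention that the signed statistic lets positive and negative differences cancel while the unsigned statistic accumulates magnitudes.

```python
import math

def p_2pl(theta, a, b):
    """2PL item characteristic curve: P(correct | ability theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def weighted_p_difference(thetas, weights, ref_params, focal_params):
    """Signed and unsigned weighted P-differences between the
    reference-group and focal-group curves for one item."""
    signed = unsigned = 0.0
    for theta, w in zip(thetas, weights):
        d = p_2pl(theta, *ref_params) - p_2pl(theta, *focal_params)
        signed += w * d        # positive and negative DIF can cancel out
        unsigned += w * abs(d) # magnitude of DIF regardless of direction
    return signed, unsigned

# Quadrature grid with (unnormalized) standard-normal weights, rescaled to 1.
thetas = [i * 0.1 for i in range(-40, 41)]
raw = [math.exp(-t * t / 2.0) for t in thetas]
weights = [r / sum(raw) for r in raw]

# Crossing DIF example: same difficulty, different discrimination, so the two
# curves cross and the signed measure washes out while the unsigned one does not.
s, u = weighted_p_difference(thetas, weights, (1.2, 0.0), (0.8, 0.0))
```

With crossing DIF as above, the signed statistic is near zero while the unsigned statistic stays positive, which is exactly why the abstract considers both.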
Peer reviewed
PDF on ERIC Download full text
Volov, Vyacheslav T.; Gilev, Alexander A. – International Journal of Environmental and Science Education, 2016
In item response theory (IRT), the response to a test item is treated as a probabilistic event that depends on the student's ability and the item's difficulty. It is noted that in the scientific literature there is very little agreement about how to determine the factors affecting item difficulty. It is suggested that the difficulty of the…
Descriptors: Item Response Theory, Test Items, Difficulty Level, Science Tests
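The IRT framing in the abstract above has a standard minimal form. As a sketch (the simplest Rasch/1PL model, not necessarily the model the authors use), the probability of a correct response depends only on the gap between ability theta and item difficulty b:

```python
import math

def rasch_probability(ability, difficulty):
    """Rasch (1PL) model: P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability equals difficulty the success probability is exactly 0.5;
# a harder item (larger b) lowers it, a stronger student (larger theta) raises it.
p_matched = rasch_probability(0.0, 0.0)
p_hard_item = rasch_probability(0.0, 2.0)
p_strong_student = rasch_probability(2.0, 0.0)
```

The open question the abstract raises is precisely what real-world factors determine the single difficulty parameter b in formulations like this one.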
Peer reviewed
Direct link
Moothedath, Shana; Chaporkar, Prasanna; Belur, Madhu N. – Perspectives in Education, 2016
In recent years, the computerised adaptive test (CAT) has gained popularity over conventional exams in evaluating student capabilities with desired accuracy. However, the key limitation of CAT is that it requires a large pool of pre-calibrated questions. In the absence of such a pre-calibrated question bank, offline exams with uncalibrated…
Descriptors: Guessing (Tests), Computer Assisted Testing, Adaptive Testing, Maximum Likelihood Statistics
Pawade, Yogesh R.; Diwase, Dipti S. – Journal of Educational Technology, 2016
Item analysis of Multiple Choice Questions (MCQs) is the process of collecting, summarizing and utilizing information from students' responses to evaluate the quality of test items. Difficulty Index (p-value), Discrimination Index (DI) and Distractor Efficiency (DE) are the parameters which help to evaluate the quality of MCQs used in an…
Descriptors: Test Items, Item Analysis, Multiple Choice Tests, Curriculum Development
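The parameters named in the abstract above have standard classical-test-theory definitions. A minimal sketch of the difficulty index (p-value) and discrimination index, assuming the common upper/lower 27% convention; the data and grouping rule are illustrative, not taken from the article, and distractor efficiency would additionally require per-option response counts:

```python
def item_analysis(scores, item_correct):
    """Classical item analysis for one multiple-choice item.

    scores:       total test scores, one per examinee
    item_correct: 1 if that examinee answered this item correctly, else 0
    """
    n = len(scores)
    difficulty = sum(item_correct) / n    # p-value: proportion answering correctly
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    k = max(1, round(0.27 * n))           # size of each extreme-score group
    upper = sum(item_correct[i] for i in order[:k])
    lower = sum(item_correct[i] for i in order[-k:])
    discrimination = (upper - lower) / k  # DI in [-1, 1]; higher = better item
    return difficulty, discrimination

scores = [95, 88, 84, 80, 75, 70, 66, 60, 55, 40]
correct = [1, 1, 1, 1, 1, 0, 1, 0, 0, 0]
p, di = item_analysis(scores, correct)
```

Here 6 of 10 examinees answer correctly (p = 0.6), and all of the top 27% but none of the bottom 27% do, giving the maximum discrimination of 1.0.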
Peer reviewed
PDF on ERIC Download full text
Qudah, Ahmad Hassan – Journal of Education and Practice, 2016
The research aims to identify a specific way to evaluate the learning of mathematics, so as to obtain a "measuring tool" for learners' achievement in mathematics that reflects their level of understanding through a score (mark) that can be trusted to a high degree. The behavior of the learner can be measured in a professional way by building the…
Descriptors: Mathematics Instruction, Mathematics Teachers, Student Evaluation, Evaluation Methods
Peer reviewed
Direct link
Zhang, Tan; Chen, Ang – AERA Online Paper Repository, 2016
Based on the Job Demands-Resources model, the study developed and validated an instrument that measures physical education teachers' job demands/resources perception. Expert review established content validity with the average item rating of 3.6/5.0. Construct validity and reliability were determined with a teacher sample (n=193). Exploratory…
Descriptors: Physical Education Teachers, Teaching Load, Resources, Measures (Individuals)
Peer reviewed
Direct link
Schindler, Julia; Richter, Tobias; Isberner, Maj-Britt; Naumann, Johannes; Neeb, Yvonne – Language Assessment Quarterly, 2018
Reading comprehension is based on the efficient accomplishment of several cognitive processes at the word, sentence, and text level. To the extent that each of these processes contributes to reading comprehension, it can cause reading difficulties if it is deficient. To identify individual sources of reading difficulties, tools are required that…
Descriptors: Construct Validity, Language Tests, Grammar, Task Analysis
Peer reviewed
Direct link
Yang, Ji Seung; Zheng, Xiaying – Journal of Educational and Behavioral Statistics, 2018
The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (IRT) package that is available from Stata v.14, 2015. Using a simulated data set and a publicly available item response data set extracted from Programme of International Student Assessment, we review the IRT package from…
Descriptors: Item Response Theory, Item Analysis, Computer Software, Statistical Analysis
Peer reviewed
Direct link
Traxler, Adrienne; Henderson, Rachel; Stewart, John; Stewart, Gay; Papak, Alexis; Lindell, Rebecca – Physical Review Physics Education Research, 2018
Research on the test structure of the Force Concept Inventory (FCI) has largely ignored gender, and research on FCI gender effects (often reported as "gender gaps") has seldom interrogated the structure of the test. These rarely crossed streams of research leave open the possibility that the FCI may not be structurally valid across…
Descriptors: Physics, Science Instruction, Sex Fairness, Gender Differences
Peer reviewed
Direct link
Liu, Ming; Rus, Vasile; Liu, Li – IEEE Transactions on Learning Technologies, 2018
Automatic question generation can help teachers to save the time necessary for constructing examination papers. Several approaches were proposed to automatically generate multiple-choice questions for vocabulary assessment or grammar exercises. However, most of these studies focused on generating questions in English with a certain similarity…
Descriptors: Multiple Choice Tests, Regression (Statistics), Test Items, Natural Language Processing