Showing 1 to 15 of 16 results
Peer reviewed
Bieleke, Maik; Goetz, Thomas; Krannich, Maike; Roos, Anna-Lena; Yanagida, Takuya – Journal of Experimental Education, 2023
Tests in educational contexts often start with easy tasks, assuming that this fosters positive experiences--a sense of control, higher valuing of the test, and more positive and less negative emotions. Although intuitive and widespread, this assumption lacks an empirical basis and a theoretical framework. We conducted a field experiment and…
Descriptors: Foreign Countries, Secondary School Students, Mathematics Tests, Test Construction
Peer reviewed
Shin, Jinnie; Bulut, Okan; Gierl, Mark J. – Journal of Experimental Education, 2020
The arrangement of response options in multiple-choice (MC) items, especially the location of the most attractive distractor, is considered critical in constructing high-quality MC items. In the current study, a sample of 496 undergraduate students taking an educational assessment course was given three test forms consisting of the same items but…
Descriptors: Foreign Countries, Undergraduate Students, Multiple Choice Tests, Item Response Theory
Peer reviewed
Wang, Xiaolin; Svetina, Dubravka; Dai, Shenghai – Journal of Experimental Education, 2019
Recently, interest in test subscore reporting for diagnosis purposes has been growing rapidly. The two simulation studies here examined factors (sample size, number of subscales, correlation between subscales, and three factors affecting subscore reliability: number of items per subscale, item parameter distribution, and data generating model)…
Descriptors: Value Added Models, Scores, Sample Size, Correlation
Peer reviewed
Roelle, Julian; Roelle, Detlev; Berthold, Kirsten – Journal of Experimental Education, 2019
Providing test questions after an initial study phase is a common instructional technique. In theory, questions that require higher-level (deep) processing should be more beneficial than those that require lower-level (shallow) processing. However, empirical evidence on the matter is inconsistent. To shed light on two potential reasons for these…
Descriptors: Testing Problems, Test Items, Cognitive Processes, Problem Based Learning
Peer reviewed
Arce-Ferrer, Alvaro J.; Bulut, Okan – Journal of Experimental Education, 2019
This study investigated the performance of four widely used data-collection designs in detecting test-mode effects (i.e., computer-based versus paper-based testing). The experimental conditions included four data-collection designs, two test-administration modes, and the availability of an anchor assessment. The test-level and item-level results…
Descriptors: Data Collection, Test Construction, Test Format, Computer Assisted Testing
Peer reviewed
DiBattista, David; Sinnige-Egger, Jo-Anne; Fortuna, Glenda – Journal of Experimental Education, 2014
The authors assessed the effects of using "none of the above" as an option in a 40-item, general-knowledge multiple-choice test administered to undergraduate students. Examinees who selected "none of the above" were given an incentive to write the correct answer to the question posed. Using "none of the above" as the…
Descriptors: Multiple Choice Tests, Testing, Undergraduate Students, Test Items
Peer reviewed
Alexander, Patricia A.; Dumas, Denis; Grossnickle, Emily M.; List, Alexandra; Firetto, Carla M. – Journal of Experimental Education, 2016
Relational reasoning is the foundational cognitive ability to discern meaningful patterns within an informational stream, but its reliable and valid measurement remains problematic. In this investigation, the measurement of relational reasoning unfolded in three stages. Stage 1 entailed the establishment of a research-based conceptualization of…
Descriptors: Cognitive Ability, Logical Thinking, Thinking Skills, Cognitive Processes
Peer reviewed
Plake, Barbara S. – Journal of Experimental Education, 1980
Three item orderings and two levels of knowledge of the ordering were used to study differences in test results, students' perceptions of the test's fairness and difficulty, and students' estimates of their test performance. No significant order effect was found. (Author/GK)
Descriptors: Difficulty Level, Higher Education, Scores, Test Format
Peer reviewed
Vidler, Derek; Hansen, Richard – Journal of Experimental Education, 1980
Relationships between patterns of answer changing and item characteristics on multiple-choice tests are discussed. The results were similar to those of previous studies but pointed to further relationships among these variables. (Author/GK)
Descriptors: College Students, Difficulty Level, Higher Education, Multiple Choice Tests
Peer reviewed
Weiten, Wayne – Journal of Experimental Education, 1982
A comparison of double versus single multiple-choice questions yielded significant differences in item difficulty, item discrimination, and internal reliability, but not in concurrent validity. (Author/PN)
Descriptors: Difficulty Level, Educational Testing, Higher Education, Multiple Choice Tests
Peer reviewed
Davey, Beth – Journal of Experimental Education, 1988
The contribution of passage variables, question types, and format variables to reading comprehension was assessed for 50 successful and 50 unsuccessful readers. A three-stage conditional regression assessed how well 20 predictor features predicted item difficulty scores. The location of response information and stem length accounted for…
Descriptors: Comparative Analysis, Difficulty Level, Elementary Secondary Education, Predictor Variables
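For readers less familiar with this kind of analysis, the sketch below shows the general idea of regressing item difficulty on item features via ordinary least squares. It is a minimal illustration only, not Davey's three-stage conditional regression or her actual 20 predictors; the feature names (stem_length, info_early) and all data values are hypothetical.

import numpy as np

# Hypothetical item features: stem length in words, and whether the response
# information appears early in the passage (1) or late (0).
stem_length = np.array([12, 25, 8, 30, 15, 22, 10, 28])
info_early  = np.array([ 1,  0, 1,  0,  1,  0,  1,  0])

# Hypothetical item difficulty: proportion of examinees answering incorrectly.
difficulty = np.array([0.20, 0.55, 0.15, 0.70, 0.30, 0.50, 0.25, 0.65])

# Ordinary least squares: difficulty ~ intercept + stem_length + info_early
X = np.column_stack([np.ones_like(stem_length, dtype=float), stem_length, info_early])
coef, *_ = np.linalg.lstsq(X, difficulty, rcond=None)

predicted = X @ coef
r_squared = 1 - np.sum((difficulty - predicted) ** 2) / np.sum((difficulty - difficulty.mean()) ** 2)

print("coefficients (intercept, stem_length, info_early):", np.round(coef, 3))
print("R^2:", round(r_squared, 3))

The coefficients indicate how much predicted item difficulty changes per unit of each feature, and R^2 summarizes how much of the variation in difficulty the features account for.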
Peer reviewed
Weiten, Wayne – Journal of Experimental Education, 1984
The effects of violating four item construction principles were examined to assess the validity of the principles and the importance of students' test wiseness. While flawed items were significantly less difficult than sound items, differences in item discrimination, test reliability, and concurrent validity were not observed. (Author/BW)
Descriptors: Difficulty Level, Higher Education, Item Analysis, Multiple Choice Tests
Peer reviewed
Plake, Barbara S.; And Others – Journal of Experimental Education, 1981
Number-right and elimination scores were analyzed on a college-level mathematics exam assembled from pretest data. Anxiety measures were administered along with the experimental forms to undergraduates. Results suggest that neither test scores nor attitudes are influenced by item order, knowledge thereof, or anxiety level. (Author/GK)
Descriptors: College Mathematics, Difficulty Level, Higher Education, Multiple Choice Tests
Peer reviewed
Hsu, Tse-Chi; And Others – Journal of Experimental Education, 1984
The indices of item difficulty and discrimination, the coefficients of effective length, and the average item information were computed and compared for both single- and multiple-answer items under six different scoring formulas. The formulas vary in how they assign partial credit and correct for guessing. (Author/BW)
Descriptors: College Entrance Examinations, Comparative Analysis, Difficulty Level, Guessing (Tests)
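As background for the indices named in this abstract, the sketch below computes two standard classical test theory statistics for dichotomously scored items: item difficulty (the proportion of examinees answering correctly) and a corrected point-biserial discrimination (the correlation between an item and the total score on the remaining items). It does not reproduce the study's six scoring formulas or its effective-length and item-information coefficients; the response matrix is hypothetical.

import numpy as np

# Hypothetical 0/1 response matrix: rows are examinees, columns are items.
responses = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 0],
])

# Item difficulty: proportion of examinees answering each item correctly.
difficulty = responses.mean(axis=0)

# Corrected point-biserial discrimination: correlate each item with the
# total score on the *other* items, to avoid part-whole inflation.
n_items = responses.shape[1]
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(n_items)
])

for j in range(n_items):
    print(f"item {j + 1}: difficulty = {difficulty[j]:.2f}, "
          f"discrimination = {discrimination[j]:.2f}")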
Peer reviewed
Klimko, Ivan P. – Journal of Experimental Education, 1984
The influence of item arrangement on students' total test performance was investigated. Two hierarchical multiple regression analyses were used to analyze the data. The main finding was that item arrangements based on item difficulty did not influence achievement examination performance. (Author/DWH)
Descriptors: Achievement Tests, Cognitive Style, College Students, Difficulty Level