Showing 1 to 15 of 23 results
Peer reviewed
Javier Del Olmo-Muñoz; Pascual D. Diago; David Arnau; David Arnau-Blasco; José Antonio González-Calero – ZDM: Mathematics Education, 2024
This research, following a sequential mixed-methods design, delves into metacognitive control in problem solving among 5- to 6-year-olds, using two floor-robot environments. In an initial qualitative phase, 82 pupils participated in tasks in which they directed a floor robot to one of two targets, with the closer target requiring more cognitive…
Descriptors: Elementary School Students, Metacognition, Robotics, Computer Simulation
Peer reviewed
Agus Santoso; Heri Retnawati; Timbul Pardede; Ibnu Rafi; Munaya Nikma Rosyada; Gulzhaina K. Kassymova; Xu Wenxin – Practical Assessment, Research & Evaluation, 2024
The test blueprint is important in test development because it guides the test item writer in creating items that match the desired objectives and specifications (the so-called a priori item characteristics), such as the item difficulty category and the distribution of items across difficulty levels.…
Descriptors: Foreign Countries, Undergraduate Students, Business English, Test Construction
Peer reviewed
Lahner, Felicitas-Maria; Lörwald, Andrea Carolin; Bauer, Daniel; Nouns, Zineb Miriam; Krebs, René; Guttormsen, Sissel; Fischer, Martin R.; Huwendiek, Sören – Advances in Health Sciences Education, 2018
Multiple true-false (MTF) items are a widely used supplement to the commonly used single-best answer (Type A) multiple choice format. However, an optimal scoring algorithm for MTF items has not yet been established, as existing studies yielded conflicting results. Therefore, this study analyzes two questions: What is the optimal scoring algorithm…
Descriptors: Scoring Formulas, Scoring Rubrics, Objective Tests, Multiple Choice Tests
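For the Lahner et al. record above, a minimal, hypothetical Python sketch contrasting two scoring rules commonly discussed for multiple true-false items: all-or-nothing (dichotomous) scoring versus partial credit per true-false judgment. The function names are illustrative assumptions, and these are not necessarily the algorithms compared in the article.

    # Two common scoring rules for a single multiple true-false (MTF) item.
    # An MTF item is a set of statements, each to be marked true or false.
    from typing import List

    def score_all_or_nothing(key: List[bool], response: List[bool]) -> float:
        # Dichotomous scoring: full credit only if every judgment is correct.
        return 1.0 if key == response else 0.0

    def score_partial_credit(key: List[bool], response: List[bool]) -> float:
        # Partial credit: fraction of true/false judgments marked correctly.
        correct = sum(k == r for k, r in zip(key, response))
        return correct / len(key)

    # Example: a four-statement item with one incorrect judgment.
    key = [True, False, True, True]
    response = [True, False, False, True]
    print(score_all_or_nothing(key, response))  # 0.0
    print(score_partial_credit(key, response))  # 0.75

Other variants, such as awarding half credit when only one judgment is wrong, also appear in this literature.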
Peer reviewed
Li, Hongli; Suen, Hoi K. – International Multilingual Research Journal, 2015
This study examines how Chinese ESL learners recognize English words while responding to a multiple-choice reading test as compared to Romance-language-speaking ESL learners. Four adult Chinese ESL learners and three adult Romance-language-speaking ESL learners participated in a think-aloud study with the Michigan English Language Assessment…
Descriptors: Chinese, English (Second Language), English Language Learners, Romance Languages
Peer reviewed
Wall, Jeffrey D.; Knapp, Janice – Journal of Information Systems Education, 2014
Learning technical computing skills is increasingly important in our technology-driven society. However, learning technical skills in information systems (IS) courses can be difficult. More than 20 percent of students in some technical courses may drop out or fail. Unfortunately, little is known about students' perceptions of the difficulty of…
Descriptors: Undergraduate Students, Information Systems, Grounded Theory, Statistical Analysis
Peer reviewed
Laprise, Shari L. – College Teaching, 2012
Successful exam composition can be a difficult task. Exams should not only assess student comprehension but also serve as learning tools in and of themselves. In a biotechnology course delivered to nonmajors at a business college, objective multiple-choice test questions often require students to choose the exception or "not true" choice. Anecdotal student…
Descriptors: Feedback (Response), Test Items, Multiple Choice Tests, Biotechnology
Peer reviewed
Wang, Jianjun – School Science and Mathematics, 2011
As the largest international study ever undertaken, the Trends in International Mathematics and Science Study (TIMSS) has been held up as a benchmark for measuring U.S. student performance in the global context. In-depth analyses of the TIMSS project are conducted in this study to examine key issues of the comparative investigation: (1) item flaws in mathematics…
Descriptors: Test Items, Figurative Language, Item Response Theory, Benchmarking
Peer reviewed
Vuk, Jasna; Morse, David T. – Research in the Schools, 2009
In this study we observed college students' behavior on two self-tailored, multiple-choice exams. Self-tailoring was defined as an option to omit up to five items from being scored on an exam. Participants, 80 undergraduate students enrolled in two sections of an educational psychology course, showed a statistically significant improvement in their…
Descriptors: College Students, Educational Psychology, Academic Achievement, Correlation
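A minimal, hypothetical Python sketch of the self-tailoring rule defined in the Vuk and Morse abstract: the student nominates up to five items to be excluded from scoring, and the exam is scored on the remaining items. The percentage metric and variable names are illustrative assumptions.

    # Self-tailored exam scoring: up to five student-chosen items are omitted.
    from typing import Dict, Set

    MAX_OMISSIONS = 5

    def score_self_tailored(responses_correct: Dict[int, bool],
                            omitted: Set[int]) -> float:
        # Percent correct on the items left after the allowed omissions.
        if len(omitted) > MAX_OMISSIONS:
            raise ValueError("At most five items may be omitted from scoring.")
        scored_items = [i for i in responses_correct if i not in omitted]
        correct = sum(responses_correct[i] for i in scored_items)
        return 100.0 * correct / len(scored_items)

    # Example: 10-item exam, two items omitted by the student.
    responses = {i: (i % 3 != 0) for i in range(1, 11)}  # items 3, 6, 9 wrong
    print(score_self_tailored(responses, omitted={3, 6}))  # 87.5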
Peer reviewed
Beigneux, Katia; Plaie, Thierry; Isingrini, Michel – International Journal of Aging and Human Development, 2007
The aim of this study was to evaluate the effect of aging on the storage of visual and spatial working memory according to Logie's (1995) model of working memory. In a first experiment, young, elderly, and very old subjects carried out two tasks usually used to measure visual span (Visual Patterns Test) and spatial span (Corsi Block Tapping Test).…
Descriptors: Memory, Spatial Ability, Aging Education, Psychometrics
Peer reviewed
Ward, Chris; Yates, Dan; Song, Joon – American Journal of Business Education, 2009
This study examined the extent to which student engagement is associated with a traditional assessment of student knowledge. In this study, ETS Business Major Field Test (MFT) scores were compared to student's self-reported survey responses to specific questions on the National Survey of Student Engagement (NSSE). Areas of the NSSE survey such as…
Descriptors: Pilot Projects, Learner Engagement, Business, Business Skills
Peer reviewed
Rocklin, Thomas; O'Donnell, Angela M. – Journal of Educational Psychology, 1987
An experiment was conducted that contrasted a variant of computerized adaptive testing, self-adapted testing, with two traditional tests. Participants completed a self-report measure of test anxiety and were randomly assigned to take one of the three tests of verbal ability. Subjects generally chose more difficult items as the test progressed. (Author/LMO)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
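A minimal, hypothetical Python sketch of self-adapted testing as described in the Rocklin and O'Donnell abstract: unlike computerized adaptive testing, where the algorithm selects the next item, the examinee chooses a difficulty level and an unused item is drawn from that level's pool. The pool structure and level labels are illustrative assumptions.

    # Self-adapted testing: the examinee, not the algorithm, picks the
    # difficulty level of the next item; an unused item is drawn from that pool.
    import random
    from typing import Dict, List, Set

    def next_item(pools: Dict[str, List[str]], chosen_level: str,
                  administered: Set[str]) -> str:
        # Draw an item the examinee has not yet seen from the chosen pool.
        available = [it for it in pools[chosen_level] if it not in administered]
        item = random.choice(available)
        administered.add(item)
        return item

    pools = {"easy": ["E1", "E2"], "medium": ["M1", "M2"], "hard": ["H1", "H2"]}
    administered: Set[str] = set()
    print(next_item(pools, "medium", administered))
    print(next_item(pools, "hard", administered))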
Peer reviewed
Wahlstrom, Merlin; And Others – Canadian Journal of Education, 1986
An important aspect of Ontario's participation in the Second International Mathematics Study was a comparative analysis of students' mathematics achievement from 1968 to 1982. Achievement levels remained remarkably constant. The problem of declining achievement in the United States was not apparent in this analysis of Ontario students. (LMO)
Descriptors: Achievement Tests, Comparative Testing, Difficulty Level, Foreign Countries
Peer reviewed
Frary, Robert B. – Applied Measurement in Education, 1991
The use of the "none-of-the-above" option (NOTA) in 20 college-level multiple-choice tests was evaluated for classes with 100 or more students. Eight academic disciplines were represented, and 295 NOTA and 724 regular test items were used. It appears that the NOTA can be compatible with good classroom measurement. (TJH)
Descriptors: College Students, Comparative Testing, Difficulty Level, Discriminant Analysis
Peer reviewed
Crehan, Kevin D.; And Others – Educational and Psychological Measurement, 1993
Studies with 220 college students found that multiple-choice test items with three options are more difficult than those with four options, and that items with a none-of-these option are more difficult than those without this option. Neither format manipulation affected item discrimination. Implications for test construction are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Difficulty Level, Distractors (Tests)
Peer reviewed
Li, Yuan H.; Lissitz, Robert W. – Journal of Educational Measurement, 2004
The analytically derived asymptotic standard errors (SEs) of maximum likelihood (ML) item estimates can be approximated by a mathematical function without examinees' responses to test items, and the empirically determined SEs of marginal maximum likelihood estimation (MMLE)/Bayesian item estimates can be obtained when the same set of items is…
Descriptors: Test Items, Computation, Item Response Theory, Error of Measurement
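For the Li and Lissitz record above, a standard asymptotic identity from item response theory that response-free approximations of this kind typically build on, written in LaTeX for a Rasch-type difficulty estimate; this is a textbook result stated under assumed model form, not necessarily the exact function derived in the article.

    \mathrm{SE}\bigl(\hat{b}_j\bigr) \approx \frac{1}{\sqrt{I(b_j)}},
    \qquad
    I(b_j) = \sum_{i=1}^{N} P_i(b_j)\bigl(1 - P_i(b_j)\bigr),
    \qquad
    P_i(b_j) = \frac{1}{1 + e^{-(\theta_i - b_j)}}

Here \theta_i is the ability of examinee i and N is the number of examinees; replacing the sum with N times an integral over an assumed ability distribution yields an approximation that uses no observed responses, which is the sense in which such standard errors can be obtained without examinees' responses to test items.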