Showing all 15 results
Jeff Allen; Jay Thomas; Stacy Dreyer; Scott Johanningmeier; Dana Murano; Ty Cruce; Xin Li; Edgar Sanchez – ACT Education Corp., 2025
This report describes the process of developing and validating the enhanced ACT. It details the changes made to the test content and the processes by which these design decisions were implemented. The authors describe how they shared the overall scope of the enhancements, including the initial blueprints, with external expert panels,…
Descriptors: College Entrance Examinations, Testing, Change, Test Construction
Peer reviewed
PDF on ERIC | Download full text
McGuire, Michael J. – International Journal for the Scholarship of Teaching and Learning, 2023
College students in a lower-division psychology course made metacognitive judgments by predicting and postdicting performance for true-false, multiple-choice, and fill-in-the-blank question sets on each of three exams. This study investigated which question format would result in the most accurate metacognitive judgments. Extending Koriat's (1997)…
Descriptors: Metacognition, Multiple Choice Tests, Accuracy, Test Format
Peer reviewed
Direct link
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
Forced-choice (FC) item formats used in noncognitive tests typically present a set of response options that measure different traits and instruct respondents to choose among these options according to their preference, in order to control the response biases commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
Peer reviewed
PDF on ERIC | Download full text
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often improved overall test results. What prompts students to modify their answers? This study aims to examine answer modification on a scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Peer reviewed
Direct link
Arslan, Burcu; Jiang, Yang; Keehner, Madeleine; Gong, Tao; Katz, Irvin R.; Yan, Fred – Educational Measurement: Issues and Practice, 2020
Computer-based educational assessments often include items that involve drag-and-drop responses. There are different ways that drag-and-drop items can be laid out and different choices that test developers can make when designing these items. Currently, these decisions are based on experts' professional judgments and design constraints, rather…
Descriptors: Test Items, Computer Assisted Testing, Test Format, Decision Making
Bronson Hui – ProQuest LLC, 2021
Vocabulary researchers have started expanding their assessment toolbox by incorporating timed tasks and psycholinguistic instruments (e.g., priming tasks) to gain insights into lexical development (e.g., Elgort, 2011; Godfroid, 2020b; Nakata & Elgort, 2020; Vandenberghe et al., 2021). These time-sensitive and implicit word measures differ…
Descriptors: Measures (Individuals), Construct Validity, Decision Making, Vocabulary Development
Peer reviewed
Direct link
Morgan, Grant B.; Moore, Courtney A.; Floyd, Harlee S. – Journal of Psychoeducational Assessment, 2018
Although content validity--how well each item of an instrument represents the construct being measured--is foundational in the development of an instrument, statistical validity is also important to the decisions that are made based on the instrument. The primary purpose of this study is to demonstrate how simulation studies can be used to assist…
Descriptors: Simulation, Decision Making, Test Construction, Validity
Peer reviewed
PDF on ERIC | Download full text
Wright, Christian D.; Huang, Austin L.; Cooper, Katelyn M.; Brownell, Sara E. – International Journal for the Scholarship of Teaching and Learning, 2018
College instructors in the United States usually make their own decisions about how to design course exams. Although summative course exams are well known to be important to student success, we know little about how instructors make decisions when designing them. To probe how instructors design exams for introductory biology, we…
Descriptors: College Faculty, Science Teachers, Science Tests, Teacher Made Tests
Peer reviewed
Direct link
Williams, Marian E.; Sando, Lara; Soles, Tamara Glen – Journal of Psychoeducational Assessment, 2014
Cognitive assessment of young children contributes to high-stakes decisions because results are often used to determine eligibility for early intervention and special education. Previous reviews of cognitive measures for young children highlighted concerns regarding adequacy of standardization samples, steep item gradients, and insufficient floors…
Descriptors: Intelligence Tests, Decision Making, High Stakes Tests, Eligibility
Peer reviewed
Direct link
Plassmann, Sibylle; Zeidler, Beate – Language Learning in Higher Education, 2014
Language testing means making decisions: about the test taker's results, but also about the test construct and the measures taken to ensure quality. This article takes the German test "telc Deutsch C1 Hochschule" as an example to illustrate this decision-making process in an academic context. The test is used for university…
Descriptors: Language Tests, Test Wiseness, Test Construction, Decision Making
Mizokawa, Donald T.; Hamlin, Michael D. – Educational Technology, 1984
Suggestions for software design in computer managed testing (CMT) cover instructions to testees, their physical format, provision of practice items, and time limit information; test item presentation, physical format, discussion of task demands, review capabilities, and rate of presentation; pedagogically helpful utilities; typefonts; vocabulary;…
Descriptors: Computer Assisted Testing, Decision Making, Guidelines, Test Construction
Peer reviewed
Wilcox, Rand R.; And Others – Journal of Educational Measurement, 1988
The second-response conditional probability model of the decision-making strategies used by examinees answering multiple-choice test items was revised. Increasing the number of distractors, or providing distractors that gave examinees (N=106) the option to follow the model, improved results and gave a good fit to the data for 29 of 30 items. (SLD)
Descriptors: Cognitive Tests, Decision Making, Mathematical Models, Multiple Choice Tests
Peer reviewed
Case, Susan M.; Swanson, David B. – Teaching and Learning in Medicine, 1993
Extended matching, a test item format currently used in medical licensing examinations, is described. Procedures for writing and reviewing such test items are outlined, test development and psychometric advantages are discussed, and issues in test administration and scoring are examined. The extended matching format is also seen as having uses for…
Descriptors: Clinical Diagnosis, Decision Making, Higher Education, Licensing Examinations (Professions)
Buser, Karen – 1996
Most seasoned test developers recognize the importance of thoughtful decision making when constructing a test. Unfortunately, many classroom achievement tests are created by novice test developers who have not received sufficient instruction in item writing (G. Gulliksen, 1986; R. J. Stiggins, 1991). The result is often a test that is poorly…
Descriptors: Achievement Tests, Decision Making, Educational Planning, Evaluation Methods
Smith, Robert L.; Carlson, Alfred B. – 1995
The feasibility of constructing test forms with practically equivalent cut scores, using judges' estimates of item difficulty as target "statistical" specifications, was investigated. Test forms with equivalent judgmental cut scores (based on judgments of item difficulty) were assembled using items from six operational forms of the…
Descriptors: Cutting Scores, Decision Making, Difficulty Level, Equated Scores