Kent Anderson Seidel – School Leadership Review, 2025
This paper examines one of three central diagnostic tools of the Concerns Based Adoption Model, the Stages of Concern Questionnaire (SoCQ). The SoCQ was developed with a focus on K-12 education. It has been used widely since its development in 1973 in early childhood, higher education, medical, business, community, and military settings. The SoCQ…
Descriptors: Questionnaires, Educational Change, Educational Innovation, Intervention
Ö. Emre C. Alagöz; Thorsten Meiser – Educational and Psychological Measurement, 2024
To improve the validity of self-report measures, researchers should control for response style (RS) effects, which can be achieved with IRTree models. A traditional IRTree model considers a response as a combination of distinct decision-making processes, where the substantive trait affects the decision on response direction, while decisions about…
Descriptors: Item Response Theory, Validity, Self Evaluation (Individuals), Decision Making
Schweizer, Karl; Wang, Tengfei; Ren, Xuezhu – Journal of Experimental Education, 2022
The essay reports two studies on confirmatory factor analysis of speeded data with an effect of selective responding. This response strategy leads test takers to choose their own working order instead of completing the items in the given order. Methods for detecting speededness despite such a deviation from the given order are proposed and…
Descriptors: Factor Analysis, Response Style (Tests), Decision Making, Test Items
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
The forced-choice (FC) item formats used for noncognitive tests typically develop a set of response options that measure different traits and instruct respondents to make judgments among these options in terms of their preference to control the response biases that are commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
Zhao, Xin; Coxe, Stefany; Sibley, Margaret H.; Zulauf-McCurdy, Courtney; Pettit, Jeremy W. – Prevention Science, 2023
There has been increasing interest in applying integrative data analysis (IDA) to analyze data across multiple studies to increase sample size and statistical power. Measures of a construct are frequently not consistent across studies. This article provides a tutorial on the complex decisions that occur when conducting harmonization of measures…
Descriptors: Data Analysis, Sample Size, Decision Making, Test Items
Sansom, Rebecca L.; Suh, Erica; Plummer, Kenneth J. – Journal of Chemical Education, 2019
Heat and enthalpy are challenging topics for general chemistry students because they are conceptually complex and require a variety of quantitative problem-solving approaches. Expert chemists draw on conditional knowledge, deciding when and under what conditions a certain problem-solving approach should be used. Decision-based learning (DBL)…
Descriptors: Decision Making, Problem Solving, Science Instruction, Chemistry
Joo, Seang-Hwane; Lee, Philseok; Stark, Stephen – Journal of Educational Measurement, 2018
This research derived information functions and proposed new scalar information indices to examine the quality of multidimensional forced choice (MFC) items based on the RANK model. We also explored how GGUM-RANK information, latent trait recovery, and reliability varied across three MFC formats: pairs (two response alternatives), triplets (three…
Descriptors: Item Response Theory, Models, Item Analysis, Reliability
Langbeheim, Elon; Ben-Eliyahu, Einat; Adadan, Emine; Akaygun, Sevil; Ramnarain, Umesh Dewnarain – Chemistry Education Research and Practice, 2022
Learning progressions (LPs) are novel models for the development of assessments in science education that often use a scale to categorize students' levels of reasoning. Pictorial representations are important in chemistry teaching and learning, and also in LPs, but the differences between pictorial and verbal items in chemistry LPs are unclear. In…
Descriptors: Science Instruction, Learning Trajectories, Chemistry, Thinking Skills
Jing Lu; Chun Wang; Ningzhong Shi – Grantee Submission, 2023
In high-stakes, large-scale, standardized tests with certain time limits, examinees are likely to engage in one of three types of behavior (e.g., van der Linden & Guo, 2008; Wang & Xu, 2015): solution behavior, rapid guessing behavior, and cheating behavior. Oftentimes examinees do not always solve all items due to various…
Descriptors: High Stakes Tests, Standardized Tests, Guessing (Tests), Cheating
Panahi, Ali; Mohebbi, Hassan – Language Teaching Research Quarterly, 2022
High-stakes testing, such as IELTS, is designed to select individuals for decision-making purposes (Fulcher, 2013b). Hence, there is a slow-growing stream of research investigating the subskills of IELTS listening and, in feedback terms, its effects on individuals and educational programs. Here, cognitive diagnostic assessment (CDA) performs it…
Descriptors: Decision Making, Listening Comprehension Tests, Multiple Choice Tests, Diagnostic Tests
Johnson, Martin; Rushton, Nicky – Educational Research, 2019
Background: The development of a set of questions is a central element of examination development, with the validity of an examination resting to a large extent on the quality of the questions that it comprises. This paper reports on the methods and findings of a project that explores how educational examination question writers engage in the…
Descriptors: Writing (Composition), Test Construction, Specialists, Protocol Analysis
Cook, Robert J.; Durning, Steven J. – AERA Online Paper Repository, 2016
In an effort to better align item development to goals of assessing higher-order tasks and decision making, complex decision trees were developed to follow clinical reasoning scripts and used as models on which multiple-choice questions could be built. This approach is compatible with best-practice assessment frameworks like Evidence Centered…
Descriptors: Multiple Choice Tests, Decision Making, Models, Task Analysis
Shulruf, Boaz; Jones, Phil; Turner, Rolf – Higher Education Studies, 2015
The determination of Pass/Fail decisions over borderline grades (i.e., grades which do not clearly distinguish between competent and incompetent examinees) has been an ongoing challenge for academic institutions. This study utilises the Objective Borderline Method (OBM) to determine examinee ability and item difficulty, and from that…
Descriptors: Undergraduate Students, Pass Fail Grading, Decision Making, Probability
Bramley, Tom – Cambridge Assessment, 2014
The aim of this study was to compare models of assessment structure for achieving differentiation between examinees of different levels of attainment in the GCSE in England. GCSEs are high-stakes curriculum-based public examinations taken by 16 year olds at the end of compulsory schooling. The context for the work was an intense period of debate…
Descriptors: Foreign Countries, Exit Examinations, Alternative Assessment, High Stakes Tests
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making