Showing 1 to 15 of 25 results
Peer reviewed | Direct link
Liang, Xinya; Cao, Chunhua – Journal of Experimental Education, 2023
A popular method for evaluating multidimensional factor structures, combining features of confirmatory and exploratory factor analysis, is Bayesian structural equation modeling with small-variance normal priors (BSEM-N). This simulation study evaluated BSEM-N as a variable selection and parameter estimation tool in factor analysis with sparse…
Descriptors: Factor Analysis, Bayesian Statistics, Structural Equation Models, Simulation
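As a rough illustration of the small-variance-prior idea behind BSEM-N, the sketch below specifies a two-factor model in which cross-loadings receive normal priors tightly centered at zero rather than being fixed to zero. Everything here (data shapes, prior scales, variable names) is an illustrative assumption, not the authors' simulation design, and identification constraints are omitted for brevity.

```python
import numpy as np
import pymc as pm

n_obs, n_items, n_factors = 100, 6, 2
Y = np.random.default_rng(0).normal(size=(n_obs, n_items))  # placeholder item responses
is_f0 = np.array([1, 1, 1, 0, 0, 0])  # hypothesized factor for each item

with pm.Model() as bsem_n:
    # Latent factor scores, one row per respondent
    eta = pm.Normal("eta", 0.0, 1.0, shape=(n_obs, n_factors))
    # Main loadings on the hypothesized factor: diffuse priors
    lam_main = pm.Normal("lam_main", 0.0, 1.0, shape=n_items)
    # Cross-loadings: small-variance priors centered at zero (the "N" in BSEM-N)
    lam_cross = pm.Normal("lam_cross", 0.0, 0.1, shape=n_items)
    # Assemble the full loading matrix from main and cross loadings
    load_f0 = pm.math.switch(is_f0, lam_main, lam_cross)
    load_f1 = pm.math.switch(is_f0, lam_cross, lam_main)
    Lambda = pm.math.stack([load_f0, load_f1], axis=1)  # (n_items, n_factors)
    resid = pm.HalfNormal("resid", 1.0, shape=n_items)
    pm.Normal("Y_obs", mu=pm.math.dot(eta, Lambda.T), sigma=resid, observed=Y)
    idata = pm.sample(500, tune=500)
```

Cross-loadings whose posteriors concentrate away from zero would then flag items for which the "zero" constraint is untenable, which is the variable-selection use the study evaluates.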
Peer reviewed | PDF on ERIC (full text)
Kent Anderson Seidel – School Leadership Review, 2025
This paper examines one of three central diagnostic tools of the Concerns Based Adoption Model, the Stages of Concern Questionnaire (SoCQ). The SoCQ was developed with a focus on K-12 education. It has been used widely since its development in 1973, in early childhood, higher education, medical, business, community, and military settings. The SoCQ…
Descriptors: Questionnaires, Educational Change, Educational Innovation, Intervention
Peer reviewed | Direct link
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered a powerful new tool for estimating item and person parameters while simultaneously testing model fit. This assessment approach aims to reduce faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
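For orientation only, the sketch below shows the basic Thurstonian choice rule underlying forced-choice pairs: each item elicits a normally distributed latent utility, and the probability of preferring one item over another depends on the difference of their means and the variance of that difference. The parameter values are made up for illustration; this is not the estimation machinery discussed in the article.

```python
import numpy as np
from scipy.stats import norm

def prob_prefer(mu_i, mu_j, var_i=1.0, var_j=1.0, cov_ij=0.0):
    """P(item i chosen over item j) when latent utilities are bivariate normal:
    t_i - t_j ~ N(mu_i - mu_j, var_i + var_j - 2*cov_ij)."""
    return norm.cdf((mu_i - mu_j) / np.sqrt(var_i + var_j - 2 * cov_ij))

print(prob_prefer(1.0, 0.2))  # the higher-utility item is chosen more often
```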
Peer reviewed | PDF on ERIC (full text)
Castillo-Diaz, Marcio Alexander; Gomes, Cristiano Mauro Assis; Jelihovschi, Enio Galinkin – International Journal of Educational Methodology, 2022
Research on metacognition points to limitations in the way the construct has traditionally been measured and to a near absence of performance-based tests. The Meta-Text is a recently created performance-based test that assesses components of cognition regulation: planning, monitoring, and judgment. This study presents the first…
Descriptors: Schemata (Cognition), Decision Making, Undergraduate Students, Foreign Countries
Peer reviewed | Direct link
Hung, Su-Pin; Huang, Hung-Yu – Journal of Educational and Behavioral Statistics, 2022
To address response style or bias in rating scales, forced-choice items are often used to request that respondents rank their attitudes or preferences among a limited set of options. The rating scales used by raters to render judgments on ratees' performance also contribute to rater bias or errors; consequently, forced-choice items have recently…
Descriptors: Evaluation Methods, Rating Scales, Item Analysis, Preferences
Peer reviewed | Direct link
Zhao, Xin; Coxe, Stefany; Sibley, Margaret H.; Zulauf-McCurdy, Courtney; Pettit, Jeremy W. – Prevention Science, 2023
There has been increasing interest in applying integrative data analysis (IDA) to analyze data across multiple studies to increase sample size and statistical power. Measures of a construct are frequently not consistent across studies. This article provides a tutorial on the complex decisions that occur when conducting harmonization of measures…
Descriptors: Data Analysis, Sample Size, Decision Making, Test Items
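A toy example of one elementary harmonization step that often precedes integrative data analysis is rescaling a construct measured on different response scales to a common metric (percent of maximum possible) before pooling. The column names, scale ranges, and values below are hypothetical; the tutorial itself covers far more involved decisions.

```python
import pandas as pd

study_a = pd.DataFrame({"anxiety": [1, 3, 5, 2]})   # 1-5 response scale
study_b = pd.DataFrame({"anxiety": [0, 7, 10, 4]})  # 0-10 response scale

def pomp(scores, lo, hi):
    """Rescale raw scores to percent of maximum possible (0-100)."""
    return 100 * (scores - lo) / (hi - lo)

study_a["anxiety_pomp"] = pomp(study_a["anxiety"], 1, 5)
study_b["anxiety_pomp"] = pomp(study_b["anxiety"], 0, 10)

pooled = pd.concat([study_a.assign(study="A"), study_b.assign(study="B")],
                   ignore_index=True)
print(pooled)
```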
Peer reviewed | Direct link
DeMars, Christine E. – Educational and Psychological Measurement, 2019
Previous work showing that revised parallel analysis can be effective with dichotomous items has used a two-parameter model and normally distributed abilities. In this study, both two- and three-parameter models were used with normally distributed and skewed ability distributions. Relatively minor skew and kurtosis in the underlying ability…
Descriptors: Item Analysis, Models, Error of Measurement, Item Response Theory
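The sketch below shows only the traditional parallel-analysis logic with dichotomous data: observed eigenvalues are retained when they exceed a reference percentile of eigenvalues from random data of the same size. The article studies a revised variant with 2PL and 3PL generating models, which this toy comparison does not reproduce; all data here are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 500, 10
data = rng.integers(0, 2, size=(n_persons, n_items))  # placeholder 0/1 responses

def eigenvalues(x):
    """Descending eigenvalues of the inter-item correlation matrix."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]

observed = eigenvalues(data)

# Reference eigenvalues from random data with the same dimensions
random_eigs = np.array([
    eigenvalues(rng.integers(0, 2, size=(n_persons, n_items))) for _ in range(200)
])
threshold = np.percentile(random_eigs, 95, axis=0)

# Retain factors whose observed eigenvalue exceeds the 95th-percentile reference
print("Factors retained:", int(np.sum(observed > threshold)))
```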
Peer reviewed | Direct link
Jin, Kuan-Yu; Wu, Yi-Jhen; Chen, Hui-Fang – Journal of Educational and Behavioral Statistics, 2022
For surveys of complex issues that entail multiple steps, multiple reference points, and nongradient attributes (e.g., social inequality), this study proposes a new multiprocess model that integrates ideal-point and dominance approaches into a treelike structure (IDtree). In the IDtree, an ideal-point approach describes an individual's attitude…
Descriptors: Likert Scales, Item Response Theory, Surveys, Responses
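Purely as an illustration of the tree idea (not the IDtree specification itself), the sketch below decomposes a four-category Likert response into two sequential binary nodes, using a made-up ideal-point function at the direction node and a dominance (2PL) function at the extremity node; all parameter values are invented.

```python
import numpy as np

def ideal_point(theta, delta, tau=1.0):
    """Ideal-point node: agreement is highest when theta is close to the item location."""
    return np.exp(-((theta - delta) ** 2) / (2 * tau ** 2))

def dominance(theta, a, b):
    """Dominance node (2PL): probability increases monotonically in theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = 0.4
p_agree = ideal_point(theta, delta=0.2)     # node 1: agree vs. disagree
p_extreme = dominance(theta, a=1.5, b=0.0)  # node 2: extreme vs. mild response
category_probs = {
    "strongly disagree": (1 - p_agree) * p_extreme,
    "disagree": (1 - p_agree) * (1 - p_extreme),
    "agree": p_agree * (1 - p_extreme),
    "strongly agree": p_agree * p_extreme,
}
print(category_probs)  # the four probabilities sum to 1
```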
Peer reviewed | Direct link
Cohen, Dale J.; Cromley, Amanda R.; Freda, Katelyn E.; White, Madeline – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2022
Here, we present a strong test of the hypothesis that sacrificial moral dilemmas are solved using the same value-based decision mechanism that operates on decisions concerning economic goods. To test this hypothesis, we developed Psychological Value Theory. Psychological Value Theory is an expansion and generalization of Cohen and Ahn's (2016)…
Descriptors: Hypothesis Testing, Decision Making, Moral Values, Problem Solving
Peer reviewed | Direct link
Joo, Seang-Hwane; Lee, Philseok; Stark, Stephen – Journal of Educational Measurement, 2018
This research derived information functions and proposed new scalar information indices to examine the quality of multidimensional forced choice (MFC) items based on the RANK model. We also explored how GGUM-RANK information, latent trait recovery, and reliability varied across three MFC formats: pairs (two response alternatives), triplets (three…
Descriptors: Item Response Theory, Models, Item Analysis, Reliability
Peer reviewed | Direct link
Langbeheim, Elon; Ben-Eliyahu, Einat; Adadan, Emine; Akaygun, Sevil; Ramnarain, Umesh Dewnarain – Chemistry Education Research and Practice, 2022
Learning progressions (LPs) are novel models for the development of assessments in science education that often use a scale to categorize students' levels of reasoning. Pictorial representations are important in chemistry teaching and learning, and also in LPs, but the differences between pictorial and verbal items in chemistry LPs are unclear. In…
Descriptors: Science Instruction, Learning Trajectories, Chemistry, Thinking Skills
Jing Lu; Chun Wang; Ningzhong Shi – Grantee Submission, 2023
In high-stakes, large-scale, standardized tests with certain time limits, examinees are likely to engage in one of three types of behavior (e.g., van der Linden & Guo, 2008; Wang & Xu, 2015): solution behavior, rapid guessing behavior, and cheating behavior. Examinees often do not solve all items due to various…
Descriptors: High Stakes Tests, Standardized Tests, Guessing (Tests), Cheating
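As a point of reference, one common heuristic for separating rapid guessing from solution behavior is a simple response-time threshold, sketched below with simulated times. The paper itself models responses and response times jointly; the threshold value and data here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
response_times = rng.lognormal(mean=3.0, sigma=0.6, size=1000)  # simulated seconds

THRESHOLD = 5.0  # assumed cutoff: responses faster than 5 s are flagged
rapid_guess = response_times < THRESHOLD
print(f"Flagged {rapid_guess.mean():.1%} of responses as rapid guessing")
```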
Peer reviewed | Direct link
Johnson, Martin; Rushton, Nicky – Educational Research, 2019
Background: The development of a set of questions is a central element of examination development, with the validity of an examination resting to a large extent on the quality of the questions that it comprises. This paper reports on the methods and findings of a project that explores how educational examination question writers engage in the…
Descriptors: Writing (Composition), Test Construction, Specialists, Protocol Analysis
Peer reviewed | PDF on ERIC (full text)
Sekercioglu, Güçlü – International Online Journal of Education and Teaching, 2018
Empirical evidence of measurement invariance across independent samples of a population implies that the factor structure of a measurement tool is equal across these samples; in other words, the tool measures the intended psychological trait with the same structure. In this case, the evidence of construct validity would be strengthened within the…
Descriptors: Factor Analysis, Error of Measurement, Factor Structure, Construct Validity
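One quick, commonly used check on whether two samples share a factor structure, related to (but much weaker than) a formal measurement-invariance test, is Tucker's congruence coefficient between loadings estimated separately in each sample. The loading vectors below are invented for illustration.

```python
import numpy as np

def tucker_congruence(l1, l2):
    """Congruence coefficient; values above roughly .95 are usually read as
    practically identical loading patterns."""
    return np.dot(l1, l2) / np.sqrt(np.dot(l1, l1) * np.dot(l2, l2))

loadings_sample_a = np.array([0.71, 0.65, 0.80, 0.58])
loadings_sample_b = np.array([0.69, 0.70, 0.77, 0.61])
print(round(tucker_congruence(loadings_sample_a, loadings_sample_b), 3))
```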
Peer reviewed | Direct link
Cook, Robert J.; Durning, Steven J. – AERA Online Paper Repository, 2016
In an effort to better align item development to goals of assessing higher-order tasks and decision making, complex decision trees were developed to follow clinical reasoning scripts and used as models on which multiple-choice questions could be built. This approach is compatible with best-practice assessment frameworks like Evidence Centered…
Descriptors: Multiple Choice Tests, Decision Making, Models, Task Analysis