Showing 256 to 270 of 9,533 results
Kane, Jesse F. – ProQuest LLC, 2023
The idea of student engagement as a predictor of student success was first introduced by Alexander Astin (1974; 1984), who studied student involvement. The connection between student involvement and student success has led to a focus on student engagement and how we measure it, to ensure that institutions are doing all they can to improve outcomes. Nothing has…
Descriptors: Learner Engagement, College Freshmen, College Seniors, Student Surveys
Peer reviewed
Direct link
Pearson, Christopher; Penna, Nigel – Assessment & Evaluation in Higher Education, 2023
E-assessments are becoming increasingly common and progressively more complex. Consequently, how these longer, more complex questions are designed and marked is critically important. This article uses the NUMBAS e-assessment tool to investigate best practice for creating longer questions and their mark schemes on surveying modules taken by engineering…
Descriptors: Automation, Scoring, Engineering Education, Foreign Countries
Peer reviewed
Direct link
Bingxue Zhang; Yang Shi; Yuxing Li; Chengliang Chai; Longfeng Hou – Interactive Learning Environments, 2023
An adaptive learning environment provides learning support suited to the individual characteristics of students, and its student model is the key element in promoting individualized learning. This paper provides a systematic overview of the existing student models, showing that the Elo rating system…
Descriptors: Electronic Learning, Models, Students, Individualized Instruction
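The Elo rating system mentioned in the entry above is commonly described by a simple update rule: after each response, the student's ability estimate and the item's difficulty estimate move in opposite directions in proportion to the prediction error. A minimal sketch in Python (the variable names, the logistic expectation, and the step size k are illustrative assumptions, not details taken from the paper):

```python
import math

def elo_update(theta, beta, correct, k=0.4):
    """One Elo-style update for a single student-item interaction.

    theta   : current student ability estimate
    beta    : current item difficulty estimate
    correct : 1 if the student answered correctly, else 0
    k       : step size (illustrative value; often tuned or decayed in practice)
    """
    # Expected probability of a correct response (logistic in theta - beta).
    expected = 1.0 / (1.0 + math.exp(-(theta - beta)))
    # Ability rises (and difficulty falls) when the student beats the expectation.
    theta_new = theta + k * (correct - expected)
    beta_new = beta - k * (correct - expected)
    return theta_new, beta_new

# Example: an average student answers a slightly hard item correctly.
print(elo_update(theta=0.0, beta=0.5, correct=1))
```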
Peer reviewed
Direct link
Daniel Jurich; Chunyan Liu – Applied Measurement in Education, 2023
Screening items for parameter drift helps protect against serious validity threats and ensure score comparability when equating forms. Although many high-stakes credentialing examinations operate with small sample sizes, few studies have investigated methods to detect drift in small-sample equating. This study demonstrates that several newly…
Descriptors: High Stakes Tests, Sample Size, Item Response Theory, Equated Scores
Peer reviewed
PDF on ERIC Download full text
Bruno D. Zumbo – International Journal of Assessment Tools in Education, 2023
In line with the journal volume's theme, this essay considers lessons from the past and visions for the future of test validity. In the first part of the essay, a description of historical trends in test validity since the early 1900s leads to the natural question of whether the discipline has progressed in its definition and description of test…
Descriptors: Test Theory, Test Validity, True Scores, Definitions
Peer reviewed
PDF on ERIC Download full text
Vural-Batik, Meryem; Örs-Özdil, Selda; Afyonkale-Talay, Necla – International Journal of Assessment Tools in Education, 2023
Working on forgiveness in psychological counseling is considered to be of significant benefit to the individual, given the positive consequences of forgiving. This study aimed to develop a measurement tool for determining self-efficacy to work on forgiveness in counseling (SSWOFIC). The most commonly regarded forgiveness…
Descriptors: Self Efficacy, Test Construction, Counseling, Interpersonal Relationship
Peer reviewed
PDF on ERIC Download full text
Metsämuuronen, Jari – Practical Assessment, Research & Evaluation, 2022
This article discusses visual techniques for detecting, on the one hand, test items that would be optimal to select for the final compilation and, on the other hand, for screening out items that would lower the quality of the compilation. Some classic visual tools are first discussed in a practical manner for diagnosing the logical,…
Descriptors: Test Items, Item Analysis, Item Response Theory, Cutting Scores
Peer reviewed
Direct link
Wind, Stefanie A. – Educational and Psychological Measurement, 2022
Researchers frequently use Mokken scale analysis (MSA), a nonparametric approach to item response theory, when they have relatively small samples of examinees. Previous studies have provided some guidance regarding the minimum sample size for applications of MSA under various conditions. However, these studies have not focused on item-level…
Descriptors: Nonparametric Statistics, Item Response Theory, Sample Size, Test Items
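Mokken scale analysis, as used in the entry above, rests on scalability coefficients such as Loevinger's H. As a rough illustration of the item-pair version for dichotomous items (a simplified sketch with synthetic data; the function name and the data-generating setup are assumptions, not the paper's simulation design):

```python
import numpy as np

def pairwise_H(x, y):
    """Loevinger's H for two dichotomous items (1 = correct/endorsed).

    H = 1 - (observed Guttman errors) / (expected errors under independence),
    where a Guttman error means passing the harder (less popular) item
    while failing the easier (more popular) one.
    """
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    # Order the pair so that `easy` is the more popular item.
    easy, hard = (x, y) if x.mean() >= y.mean() else (y, x)
    observed = np.sum((hard == 1) & (easy == 0))
    expected = n * hard.mean() * (1 - easy.mean())
    return 1 - observed / expected

# Synthetic responses driven by a common latent trait.
rng = np.random.default_rng(0)
theta = rng.normal(size=200)
item1 = (theta + rng.normal(scale=1.0, size=200) > -0.5).astype(int)
item2 = (theta + rng.normal(scale=1.0, size=200) > 0.5).astype(int)
print(round(pairwise_H(item1, item2), 3))
```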
Peer reviewed
Direct link
Ranger, Jochen; Brauer, Kay – Journal of Educational and Behavioral Statistics, 2022
The generalized S-X² test is a test of item fit for items with a polytomous response format. The test is based on a comparison of the observed and expected numbers of responses in strata defined by the test score. In this article, we make four contributions. We demonstrate that the performance of the generalized S-X² test…
Descriptors: Goodness of Fit, Test Items, Statistical Analysis, Item Response Theory
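The S-X² family of fit statistics compares observed and model-expected response frequencies within strata defined by the test score. Below is a much-simplified sketch of that comparison for a dichotomous item, assuming the model-implied probabilities per stratum are already available; it is not the full Orlando-Thissen procedure, which derives the expected values from the fitted IRT model and collapses sparse strata:

```python
import numpy as np

def score_group_chi_square(item_responses, total_scores, expected_p):
    """Simplified S-X2-style comparison for one dichotomous item.

    item_responses : 0/1 responses to the studied item
    total_scores   : examinees' total (or rest) scores, used to form strata
    expected_p     : dict mapping each score stratum to the model-implied
                     probability of a correct response in that stratum
    """
    item_responses = np.asarray(item_responses)
    total_scores = np.asarray(total_scores)
    chi2 = 0.0
    for score, p_model in expected_p.items():
        in_group = total_scores == score
        n_k = in_group.sum()
        if n_k == 0 or p_model in (0.0, 1.0):
            continue  # skip empty strata and degenerate expectations
        p_obs = item_responses[in_group].mean()
        chi2 += n_k * (p_obs - p_model) ** 2 / (p_model * (1 - p_model))
    return chi2
```

In the actual generalized test, the degrees of freedom and the collapsing of thinly populated strata are handled more carefully than in this sketch.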
Peer reviewed
Direct link
Spataro, Pietro; Mulligan, Neil W.; Cestari, Vincenzo; Santirocchi, Alessandro; Saraulli, Daniele; Rossi-Arnaud, Clelia – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2022
In the Attentional Boost Effect (ABE), words or images encoded with to-be-detected target squares are later recognized better than words or images encoded with to-be-ignored distractor squares. The present study sought to determine whether the ABE enhanced the encoding of the item-specific and relational properties of the studied words by using…
Descriptors: Attention, Memory, Multiple Choice Tests, Recall (Psychology)
Peer reviewed
Direct link
Joo, Seang-Hwane; Lee, Philseok – Journal of Educational Measurement, 2022
This study proposes a new Bayesian differential item functioning (DIF) detection method using posterior predictive model checking (PPMC). Item fit measures, including infit, outfit, the observed score distribution (OSD), and Q1, were considered as discrepancy statistics for the PPMC DIF method. The performance of the PPMC DIF method was…
Descriptors: Test Items, Bayesian Statistics, Monte Carlo Methods, Prediction
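Posterior predictive model checking, as used for DIF detection in the entry above, comes down to comparing a discrepancy statistic computed on the observed data with the same statistic computed on data replicated from the posterior. A generic sketch of the resulting posterior predictive p-value (the arrays of draws are placeholders filled with synthetic values; the paper's actual discrepancies are infit, outfit, the observed score distribution, and Q1):

```python
import numpy as np

def posterior_predictive_pvalue(observed_discrepancies, replicated_discrepancies):
    """Proportion of posterior draws in which the replicated discrepancy is at
    least as extreme as the realized (observed-data) discrepancy.

    Both arguments hold one value per posterior draw; p-values near 0 or 1
    flag misfit (here, potential DIF).
    """
    observed = np.asarray(observed_discrepancies)
    replicated = np.asarray(replicated_discrepancies)
    return np.mean(replicated >= observed)

# Synthetic illustration: for a well-fitting item the p-value hovers near 0.5.
rng = np.random.default_rng(1)
obs = rng.normal(loc=1.0, scale=0.1, size=1000)  # e.g., outfit on observed data
rep = rng.normal(loc=1.0, scale=0.1, size=1000)  # outfit on replicated data
print(posterior_predictive_pvalue(obs, rep))
```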
Peer reviewed
Direct link
Svicher, Andrea; Gori, Alessio; Di Fabio, Annamaria – Australian Journal of Career Development, 2022
The present study examined the Italian version of the Work as Meaning Inventory and the Work as Meaning Inventory for University Students through a network perspective. Network analysis was applied to 505 Italian workers assessed via the Work as Meaning Inventory and 214 Italian university students assessed via the Work as Meaning Inventory for…
Descriptors: Foreign Countries, Employees, College Students, Network Analysis
Peer reviewed
PDF on ERIC Download full text
Guo, Hongwen; Lu, Ru; Johnson, Matthew S.; McCaffrey, Dan F. – ETS Research Report Series, 2022
It is desirable for an educational assessment to be constructed of items that can differentiate among test takers' performance levels, and it is thus important to estimate the item discrimination parameters accurately in either classical test theory or item response theory. It is particularly challenging to do so when the sample sizes are…
Descriptors: Test Items, Item Response Theory, Item Analysis, Educational Assessment
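In classical test theory, the item discrimination referred to in the entry above is commonly summarized by the corrected item-total (point-biserial) correlation, an index that becomes noisy with small samples. A minimal sketch with synthetic data (this is a generic illustration of the index, not the estimation approach studied in the report):

```python
import numpy as np

def corrected_item_total_correlation(responses, item_index):
    """Point-biserial correlation between one item and the rest score
    (total score with the studied item removed), a classical
    discrimination index."""
    responses = np.asarray(responses, dtype=float)  # examinees x items, 0/1
    item = responses[:, item_index]
    rest = responses.sum(axis=1) - item
    return np.corrcoef(item, rest)[0, 1]

# Synthetic 0/1 response matrix driven by a common latent trait.
rng = np.random.default_rng(2)
theta = rng.normal(size=(100, 1))
difficulty = rng.normal(size=(1, 10))
data = (theta - difficulty + rng.normal(size=(100, 10)) > 0).astype(int)
print(round(corrected_item_total_correlation(data, item_index=0), 3))
```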
Peer reviewed
Direct link
Kunz, Tanja; Meitinger, Katharina – Field Methods, 2022
Although list-style open-ended questions generally help us gain deeper insights into respondents' thoughts, opinions, and behaviors, the quality of responses is often compromised. We tested a dynamic design and a follow-up design to motivate respondents to give higher-quality responses than with a static design, but without overburdening them. Our…
Descriptors: Online Surveys, Item Response Theory, Test Items, Test Format
Peer reviewed
Direct link
Wang, Weimeng; Liu, Yang; Liu, Hongyun – Journal of Educational and Behavioral Statistics, 2022
Differential item functioning (DIF) occurs when the probability of endorsing an item differs across groups for individuals with the same latent trait level. The presence of DIF items may jeopardize the validity of an instrument; therefore, it is crucial to identify DIF items in routine operations of educational assessment. While DIF detection…
Descriptors: Test Bias, Test Items, Equated Scores, Regression (Statistics)
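The definition of DIF in the entry above (different endorsement probabilities across groups at the same latent trait level) can be made concrete with a two-parameter logistic item whose parameters differ between a reference and a focal group. A small sketch of that comparison (the parameter values are arbitrary and illustrate uniform DIF only; they are not from the article):

```python
import math

def irt_2pl_probability(theta, a, b):
    """Two-parameter logistic probability of endorsing an item."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Same discrimination, but the item is harder for the focal group: uniform DIF.
reference = {"a": 1.2, "b": 0.0}
focal = {"a": 1.2, "b": 0.5}

for theta in (-1.0, 0.0, 1.0):
    p_ref = irt_2pl_probability(theta, **reference)
    p_foc = irt_2pl_probability(theta, **focal)
    # With DIF, the gap persists even though theta (the trait level) is equal.
    print(f"theta={theta:+.1f}  P(ref)={p_ref:.3f}  P(focal)={p_foc:.3f}")
```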