Showing 1 to 15 of 25 results
Peer reviewed
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. Most previous studies have investigated IPE under linear testing; IPE research under adaptive testing is scarce. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
Peer reviewed
Meljun Barnayha; Gamaliel Gonzales; Rachel Lavador; Jessamae Martel; Ma. Kathleen Urot; Roselyn Gonzales – Psychology in the Schools, 2025
This study examines the determinants of online academic dishonesty using the theory of planned behavior. We surveyed 1087 college students in Central Philippines and utilized a partial least squares-structural equation modeling analysis to evaluate a proposed model. Results demonstrate that 10 of the 11 hypothesized relationships are statistically…
Descriptors: Self Control, Cheating, Intervention, Ethics
Peer reviewed
Meijuan Li; Hongyun Liu; Mengfei Cai; Jianlin Yuan – Education and Information Technologies, 2024
In the human-to-human Collaborative Problem Solving (CPS) test, students' problem-solving process reflects the interdependency among partners. The high interdependency in CPS makes it very sensitive to group composition. For example, the group outcome might be driven by a highly competent group member, so it does not reflect all the individual…
Descriptors: Problem Solving, Computer Assisted Testing, Cooperative Learning, Task Analysis
Peer reviewed
PDF on ERIC
Osman Tat; Abdullah Faruk Kilic – Turkish Online Journal of Distance Education, 2024
The widespread availability of internet access in daily life has resulted in a greater acceptance of online assessment methods. E-assessment platforms offer various features such as randomizing questions and answers, utilizing extensive question banks, setting time limits, and managing access during online exams. Electronic assessment enables…
Descriptors: Test Construction, Test Validity, Test Reliability, Anxiety
Peer reviewed
Esther Ulitzsch; Janine Buchholz; Hyo Jeong Shin; Jonas Bertling; Oliver Lüdtke – Large-scale Assessments in Education, 2024
Common indicator-based approaches to identifying careless and insufficient effort responding (C/IER) in survey data scan response vectors or timing data for aberrances, such as patterns signaling straight lining, multivariate outliers, or signals that respondents rushed through the administered items. Each of these approaches is susceptible to…
Descriptors: Response Style (Tests), Attention, Achievement Tests, Foreign Countries
Peer reviewed
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both test validity and test security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment
Peer reviewed
Yongze Xu – Educational and Psychological Measurement, 2024
The questionnaire method has always been an important research method in psychology. The increasing prevalence of multidimensional trait measures in psychological research has led researchers to use longer questionnaires. However, questionnaires that are too long will inevitably reduce the quality of the completed questionnaires and the efficiency…
Descriptors: Item Response Theory, Questionnaires, Generalization, Simulation
Peer reviewed
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
Peer reviewed
PDF on ERIC
Daniel M. Settlage; Jim R. Wollscheid – Journal of the Scholarship of Teaching and Learning, 2024
The examination of the testing mode effect has received increased attention as higher education has shifted to remote testing during the COVID-19 pandemic. We believe the testing mode effect consists of four components: the ability to physically write on the test, the method of answer recording, the proctoring/testing environment, and the effect…
Descriptors: College Students, Macroeconomics, Tests, Answer Sheets
Peer reviewed
Stefanie A. Wind; Beyza Aksu-Dunya – Applied Measurement in Education, 2024
Careless responding is a pervasive concern in research using affective surveys. Although researchers have considered various methods for identifying careless responses, studies are limited that consider the utility of these methods in the context of computer adaptive testing (CAT) for affective scales. Using a simulation study informed by recent…
Descriptors: Response Style (Tests), Computer Assisted Testing, Adaptive Testing, Affective Measures
Collette Marie Lere' London – ProQuest LLC, 2024
The purpose of this quantitative, quasi-experimental study was to determine if, and to what extent, there is a statistically significant difference between pre- and posttest critical thinking scores of U.S. Navy Operations Specialist A-school participants in an adaptive technology training environment and those in the traditional learning environment.…
Descriptors: Military Training, Armed Forces, Critical Thinking, Skill Development
Peer reviewed
Esther Ulitzsch; Qiwei He; Steffi Pohl – Grantee Submission, 2024
This is an editorial for a special issue "Innovations in Exploring Sequential Process Data" in the journal Zeitschrift für Psychologie. Process data refer to log files generated by human-computer interactive items. They document the entire process, including keystrokes, mouse clicks as well as the associated time stamps, performed by a…
Descriptors: Educational Innovation, Man Machine Systems, Educational Technology, Computer Assisted Testing
Peer reviewed
Kaiwen Man – Educational and Psychological Measurement, 2024
In various fields, including college admission, medical board certifications, and military recruitment, high-stakes decisions are frequently made based on scores obtained from large-scale assessments. These decisions necessitate precise and reliable scores that enable valid inferences to be drawn about test-takers. However, the ability of such…
Descriptors: Prior Learning, Testing, Behavior, Artificial Intelligence
Peer reviewed
Esther Ulitzsch; Steffi Pohl; Lale Khorramdel; Ulf Kroehne; Matthias von Davier – Journal of Educational and Behavioral Statistics, 2024
Questionnaires are by far the most common tool for measuring noncognitive constructs in psychology and educational sciences. Response bias may pose an additional source of variation between respondents that threatens validity of conclusions drawn from questionnaire data. We present a mixture modeling approach that leverages response time data from…
Descriptors: Item Response Theory, Response Style (Tests), Questionnaires, Secondary School Students
Peer reviewed
Xuelan Qiu; Jimmy de la Torre; You-Gan Wang; Jinran Wu – Educational Measurement: Issues and Practice, 2024
Multidimensional forced-choice (MFC) items have been found to be useful to reduce response biases in personality assessments. However, conventional scoring methods for the MFC items result in ipsative data, hindering the wider applications of the MFC format. In the last decade, a number of item response theory (IRT) models have been developed,…
Descriptors: Item Response Theory, Personality Traits, Personality Measures, Personality Assessment