Showing all 10 results
Peer reviewed
Ferrari-Bridgers, Franca – International Journal of Listening, 2023
While many tools exist to assess student content knowledge, there are few that assess whether students display the critical listening skills necessary to interpret the quality of a speaker's message at the college level. The following research provides preliminary evidence for the internal consistency and factor structure of a tool, the…
Descriptors: Factor Structure, Test Validity, Community College Students, Test Reliability
Peer reviewed
Quinn-Nilas, Christopher; Kennett, Deborah J.; Maki, Karen – Educational Psychology, 2019
This study tested the factor structure of the explanatory style for failure (ESF) using confirmatory factor analysis (CFA), tested measurement invariance across direct entry and transfer students, tested latent mean differences between these groups on the ESF factors, and tested a theoretical path model whereby ESF explains variance in grades…
Descriptors: Factor Structure, Failure, College Transfer Students, Foreign Countries
Peer reviewed
Eitel, Alexander; Prinz, Anja; Kollmer, Julia; Niessen, Lea; Russow, Jessica; Ludäscher, Marvin; Renkl, Alexander; Lindner, Marlit Annalena – Psychology Learning and Teaching, 2021
In this study, we present the newly developed "Misconceptions about Multimedia Learning Questionnaire" (MMLQ), we evaluate its psychometric properties (item difficulties, scale reliabilities, and internal structure), and we use it to examine the prevalence of four different misconceptions about multimedia learning in student teachers and…
Descriptors: Multimedia Instruction, Misconceptions, Questionnaires, Test Reliability
Peer reviewed
Novak, Elena; McDaniel, Kerrie; Daday, Jerry; Soyturk, Ilker – British Journal of Educational Technology, 2022
e-Textbooks and e-learning technologies have become ubiquitous in college and university courses as faculty seek out ways to provide more engaging, flexible and customizable learning opportunities for students. However, the same technologies that support learning can serve as a source of frustration. Research on frustration with technology is…
Descriptors: Electronic Learning, Electronic Publishing, Textbooks, Student Attitudes
Peer reviewed
Geramipour, Masoud – Language Testing in Asia, 2021
Rasch testlet and bifactor models are two measurement models that could deal with local item dependency (LID) in assessing the dimensionality of reading comprehension testlets. This study aimed to apply the measurement models to real item response data of the Iranian EFL reading comprehension tests and compare the validity of the bifactor models…
Descriptors: Foreign Countries, Second Language Learning, English (Second Language), Reading Tests
Peer reviewed
Dönmez, Onur; Akbulut, Yavuz; Telli, Esra; Kaptan, Miray; Özdemir, Ibrahim H.; Erdem, Mukaddes – Education and Information Technologies, 2022
In the current study, we aimed to develop a reliable and valid scale to address individual cognitive load types. Existing scale development studies involved a limited number of items without adequate convergent, discriminant, and criterion validity checks. Through a multistep correlational study, we proposed a three-factor scale with 13 items to…
Descriptors: Test Construction, Content Validity, Construct Validity, Test Reliability
Peer reviewed
Kozan, Kadir – AERA Online Paper Repository, 2017
The purpose of the present small-scale study was to investigate the factor structure of an adapted, 13-item subjective English cognitive load scale in a fully online learning context. To serve this purpose, adult online learners enrolled in a fully online graduate program completed the adapted English cognitive load scale at the end of an…
Descriptors: Factor Structure, Test Validity, Difficulty Level, Cognitive Processes
Peer reviewed
Bolkan, San – Communication Education, 2017
This study was conducted to create a reliable and valid low- to medium-inference, multidimensional measure of instructor clarity from seminal work across several academic fields. Five hundred sixty-six students responded to an initial pool of 87 items and were split into two samples to investigate the factor structure of the new measurement…
Descriptors: Test Construction, Test Validity, Measures (Individuals), Factor Structure
Liu, Hsin-min – ProQuest LLC, 2014
One of the fundamental problems in language testing is the lack of adequate generalizability between what a test is measuring and what fulfills the learners' real-world language use needs. It is important to recognize that no matter how precisely a test measures a construct, if the way that a construct is defined and the way that test tasks are…
Descriptors: Reading Tests, Language Tests, Task Analysis, Generalizability Theory
Yoon, So Yoon – ProQuest LLC, 2011
Working under classical test theory (CTT) and item response theory (IRT) frameworks, this study investigated psychometric properties of the Revised Purdue Spatial Visualization Tests: Visualization of Rotations (Revised PSVT:R). The original version, the PSVT:R was designed by Guay (1976) to measure spatial visualization ability in…
Descriptors: Undergraduate Students, Test Bias, Guessing (Tests), Construct Validity