Showing all 10 results
Peer reviewed
Yue Huang; Joshua Wilson – Journal of Computer Assisted Learning, 2025
Background: Automated writing evaluation (AWE) systems, used as formative assessment tools in writing classrooms, are promising for enhancing instruction and improving student performance. Although meta-analytic evidence supports AWE's effectiveness in various contexts, research on its effectiveness in the U.S. K-12 setting has lagged behind its…
Descriptors: Writing Evaluation, Writing Skills, Writing Tests, Writing Instruction
Peer reviewed
Kevin Ackermans; Marjoke Bakker; Pierre Gorissen; Anne-Marieke Loon; Marijke Kral; Gino Camp – Journal of Computer Assisted Learning, 2024
Background: A practical test that measures the information and communication technology (ICT) skills students need for effectively using ICT in primary education has yet to be developed (Oh et al., 2021). This paper reports on the development, validation, and reliability of a test measuring primary school students' ICT skills required for…
Descriptors: Test Construction, Test Validity, Measures (Individuals), Elementary School Students
Peer reviewed
Mohammad Nayef Ayasrah; Mohamad Ahmad Saleem Khasawneh; Mazen Omar Almulla; Amoura Hassan Aboutaleb – Journal of Computer Assisted Learning, 2025
Background: One area that has been dramatically changed by artificial intelligence (AI) is educational environments. Chatbots, Recommender Systems, Adaptive Learning Systems and Large Language Models have been emerging as practical tools for facilitating learning. However, using such tools appropriately is challenging. In this regard, the…
Descriptors: Test Construction, Test Validity, Test Reliability, Rating Scales
Peer reviewed
Whitelock-Wainwright, Alexander; Gašević, Dragan; Tsai, Yi-Shan; Drachsler, Hendrik; Scheffel, Maren; Muñoz-Merino, Pedro J.; Tammets, Kairit; Delgado Kloos, Carlos – Journal of Computer Assisted Learning, 2020
To assist higher education institutions in meeting the challenge of limited student engagement in the implementation of Learning Analytics services, the Questionnaire for Student Expectations of Learning Analytics (SELAQ) was developed. This instrument contains 12 items, which are explained by a purported two-factor structure of "Ethical and…
Descriptors: Questionnaires, Test Construction, Test Validity, Learning Analytics
Peer reviewed
Ali Alqarni – Journal of Computer Assisted Learning, 2025
Background: Critical thinking is essential in modern education, and artificial intelligence (AI) offers new possibilities for enhancing it. However, the lack of validated tools to assess teachers' AI-integrated pedagogical skills remains a challenge. Objectives: The current study aimed to develop and validate the Artificial Intelligence-Critical…
Descriptors: Artificial Intelligence, Technology Uses in Education, Test Construction, Test Validity
Peer reviewed
Andersen, Martin S.; Makransky, Guido – Journal of Computer Assisted Learning, 2021
Measuring cognitive load is important in virtual learning environments (VLE). Thus, valid and reliable measures of cognitive load are important to support instructional design in VLE. Through three studies, we investigated the validity and reliability of Leppink's Cognitive Load Scale (CLS) and developed the extraneous cognitive load (EL)…
Descriptors: Test Construction, Test Validity, Test Reliability, Cognitive Processes
Peer reviewed
Wafa Mohammed Aldighrir; Fatimah Mohamed Asiri – Journal of Computer Assisted Learning, 2025
Background: As educational institutions increasingly operate as multicultural hubs, leaders must navigate the complexities of cultural differences, language barriers and diverse learning styles in digital environments. These challenges are amplified by the lack of non-verbal cues and the asynchronous nature of online communication, which can lead…
Descriptors: Foreign Countries, Test Construction, Measures (Individuals), Test Validity
Peer reviewed
Hooker, J. F.; Denker, K. J.; Summers, M. E.; Parker, M. – Journal of Computer Assisted Learning, 2016
Previous research into the benefits of student response systems (SRS) brought into the classroom revealed that SRS can contribute positively to student experiences. However, while the benefits of SRS have been conceptualized and operationalized into a widely cited scale, the validity of this scale had not been tested. Furthermore,…
Descriptors: Technology Uses in Education, Factor Analysis, Audience Response Systems, Handheld Devices
Peer reviewed
Chang, Wen-Hui; Liu, Yuan-Chen; Huang, Tzu-Hua – Journal of Computer Assisted Learning, 2017
The purpose of this study is to develop a multi-dimensional scale to measure students' awareness of key competencies for M-learning and to test its reliability and validity. The Key Competencies of Mobile Learning Scale (KCMLS) was determined via confirmatory factor analysis to have four dimensions: team collaboration, creative thinking, critical…
Descriptors: Test Construction, Multidimensional Scaling, Electronic Learning, Test Reliability
Peer reviewed
Cheng, M.-T.; She, H.-C.; Annetta, L. A. – Journal of Computer Assisted Learning, 2015
Many studies have shown the positive impact of serious educational games (SEGs) on learning outcomes. However, there still exists insufficient research that delves into the impact of immersive experience in the process of gaming on SEG-based science learning. The dual purpose of this study was to further explore this impact. One purpose was to…
Descriptors: Science Instruction, Educational Games, Technology Uses in Education, Educational Technology