Umar Alkafaween; Ibrahim Albluwi; Paul Denny – Journal of Computer Assisted Learning, 2025
Background: Automatically graded programming assignments provide instant feedback to students and significantly reduce manual grading time for instructors. However, creating comprehensive suites of test cases for programming problems within automatic graders can be time-consuming and complex. The effort needed to define test suites may deter some…
Descriptors: Automation, Grading, Introductory Courses, Programming
Alexandron, Giora; Wiltrout, Mary Ellen; Berg, Aviram; Gershon, Sa'ar Karp; Ruipérez-Valiente, José A. – Journal of Computer Assisted Learning, 2023
Background: Massive Open Online Courses (MOOCs) have touted the idea of democratizing education, but soon enough, this utopian idea collided with the reality of finding sustainable business models. In addition, the promise of harnessing interactive and social web technologies to promote meaningful learning was only partially successful. And…
Descriptors: MOOCs, Evaluation, Models, Learner Engagement
Yue Huang; Joshua Wilson – Journal of Computer Assisted Learning, 2025
Background: Automated writing evaluation (AWE) systems, used as formative assessment tools in writing classrooms, are promising for enhancing instruction and improving student performance. Although meta-analytic evidence supports AWE's effectiveness in various contexts, research on its effectiveness in the U.S. K-12 setting has lagged behind its…
Descriptors: Writing Evaluation, Writing Skills, Writing Tests, Writing Instruction
Kevin Ackermans; Marjoke Bakker; Pierre Gorissen; Anne-Marieke Loon; Marijke Kral; Gino Camp – Journal of Computer Assisted Learning, 2024
Background: A practical test that measures the information and communication technology (ICT) skills students need for effectively using ICT in primary education has yet to be developed (Oh et al., 2021). This paper reports on the development, validation, and reliability of a test measuring primary school students' ICT skills required for…
Descriptors: Test Construction, Test Validity, Measures (Individuals), Elementary School Students
Maria Aristeidou; Simon Cross; Klaus-Dieter Rossade; Carlton Wood; Terri Rees; Patrizia Paci – Journal of Computer Assisted Learning, 2024
Background: Research into online exams in higher education has grown significantly, especially as they became common practice during the COVID-19 pandemic. However, previous studies focused on understanding individual factors that relate to students' dispositions towards online exams in 'traditional' universities. Moreover, there is little…
Descriptors: Higher Education, Computer Assisted Testing, COVID-19, Pandemics
Whitelock-Wainwright, Alexander; Gašević, Dragan; Tsai, Yi-Shan; Drachsler, Hendrik; Scheffel, Maren; Muñoz-Merino, Pedro J.; Tammets, Kairit; Delgado Kloos, Carlos – Journal of Computer Assisted Learning, 2020
To assist higher education institutions in meeting the challenge of limited student engagement in the implementation of Learning Analytics services, the Questionnaire for Student Expectations of Learning Analytics (SELAQ) was developed. This instrument contains 12 items, which are explained by a purported two-factor structure of "Ethical and…
Descriptors: Questionnaires, Test Construction, Test Validity, Learning Analytics
Ali Alqarni – Journal of Computer Assisted Learning, 2025
Background: Critical thinking is essential in modern education, and artificial intelligence (AI) offers new possibilities for enhancing it. However, the lack of validated tools to assess teachers' AI-integrated pedagogical skills remains a challenge. Objectives: The current study aimed to develop and validate the Artificial Intelligence-Critical…
Descriptors: Artificial Intelligence, Technology Uses in Education, Test Construction, Test Validity
Luo, Yi Fang; Yang, Shu Ching; Lu, Chia Mei – Journal of Computer Assisted Learning, 2021
Information technology provides the potential for polychronic learning. However, research on polychronicity in the educational field is scarce. The purposes of this study were to develop a multidimensional polychronicity scale for information technology learning and explore the relationship between polychronicity in information…
Descriptors: Information Technology, Measures (Individuals), Electronic Learning, Time Management
Andersen, Martin S.; Makransky, Guido – Journal of Computer Assisted Learning, 2021
Measuring cognitive load is important in virtual learning environments (VLE). Thus, valid and reliable measures of cognitive load are important to support instructional design in VLE. Through three studies, we investigated the validity and reliability of Leppink's Cognitive Load Scale (CLS) and developed the extraneous cognitive load (EL)…
Descriptors: Test Construction, Test Validity, Test Reliability, Cognitive Processes
Hsia, Yen-Teh; Jong, Bin-Shyan; Lin, Tsong-Wuu; Liao, Ji-Yang – Journal of Computer Assisted Learning, 2019
Suppose learners use their free time to go online to review course materials, and they do so by taking optional tests that consist of multiple-choice questions (MCQs). What will happen if, for every practice question, there is always a choice (out of four possible choices) that is marked as "the (current) hot choice?" Will this make any…
Descriptors: Multiple Choice Tests, Test Preparation, Learning Processes, Test Construction
Hooker, J. F.; Denker, K. J.; Summers, M. E.; Parker, M. – Journal of Computer Assisted Learning, 2016
Previous research into the benefits that student response systems (SRS) bring to the classroom revealed that SRS can contribute positively to student experiences. However, while the benefits of SRS have been conceptualized and operationalized into a widely cited scale, the validity of this scale had not been tested. Furthermore,…
Descriptors: Technology Uses in Education, Factor Analysis, Audience Response Systems, Handheld Devices
Chang, Wen-Hui; Liu, Yuan-Chen; Huang, Tzu-Hua – Journal of Computer Assisted Learning, 2017
The purpose of this study is to develop a multi-dimensional scale to measure students' awareness of key competencies for M-learning and to test its reliability and validity. The Key Competencies of Mobile Learning Scale (KCMLS) was determined via confirmatory factor analysis to have four dimensions: team collaboration, creative thinking, critical…
Descriptors: Test Construction, Multidimensional Scaling, Electronic Learning, Test Reliability