Showing all 14 results
Peer reviewed
PDF on ERIC (full text available)
Kusairi, Sentot – Journal of Technology and Science Education, 2020
Formative feedback plays an important role in assisting students in their learning process. However, providing information about student weaknesses and strengths is one of the challenges teachers face when implementing formative assessment. This study aims to develop a web-based formative feedback system that is able to provide specific…
Descriptors: Student Evaluation, Formative Evaluation, Feedback (Response), Computer Assisted Testing
Joe Olsen; Amy Adair; Janice Gobert; Michael Sao Pedro; Mariel O'Brien – Grantee Submission, 2022
Many national science frameworks (e.g., Next Generation Science Standards) argue that developing mathematical modeling competencies is critical for students' deep understanding of science. However, science teachers may be unprepared to assess these competencies. We are addressing this need by developing virtual lab performance assessments that…
Descriptors: Mathematical Models, Intelligent Tutoring Systems, Performance Based Assessment, Data Collection
Peer reviewed
Direct link
Tan, Kim Chwee Daniel; Taber, Keith S.; Liew, Yong Qiang; Teo, Kay Liang Alan – Chemistry Education Research and Practice, 2019
The internet is prevalent in society today, and user-friendly web-based productivity tools are readily available for developing diagnostic instruments. This study sought to determine the affordances of a web-based diagnostic instrument on ionisation energy (wIEDI) based on the pen-and-paper version, the Ionisation Energy Diagnostic Instrument…
Descriptors: Energy, Secondary School Science, Chemistry, Diagnostic Tests
Peer reviewed
PDF on ERIC (full text available)
El Rassi, Mary Ann Barbour – International Association for Development of the Information Society, 2019
It has long been debated whether the Open-Book-Open-Web (OBOW) exam is as useful and efficient as the traditional closed-book exam. Some scholars and practitioners have doubted the efficiency of the OBOW format and raised the possibility of cheating, as it is not directly monitored. This paper investigates the effectiveness of OBOW exams by comparing them with…
Descriptors: Developing Nations, Test Format, Tests, Cheating
Peer reviewed
Direct link
Kuo, Bor-Chen; Liao, Chen-Huei; Pai, Kai-Chih; Shih, Shu-Chuan; Li, Cheng-Hsuan; Mok, Magdalena Mo Ching – Educational Psychology, 2020
The current study explores students' collaboration and problem solving (CPS) abilities using a human-to-agent (H-A) computer-based collaborative problem solving assessment. Five CPS assessment units with 76 conversation-based items were constructed using the PISA 2015 CPS framework. In the experiment, 53,855 ninth and tenth graders in Taiwan were…
Descriptors: Computer Assisted Testing, Cooperative Learning, Problem Solving, Item Response Theory
Peer reviewed
PDF on ERIC (full text available)
Abidin, Aang Zainul; Istiyono, Edi; Fadilah, Nunung; Dwandaru, Wipsar Sunu Brams – International Journal of Evaluation and Research in Education, 2019
Classical assessments that are not comprehensive and do not account for students' initial abilities yield measurement results far from their actual abilities. This study was conducted to produce a computerized adaptive test for physics critical thinking skills (CAT-PhysCriTS) that met the feasibility criteria. The test was presented for the physics…
Descriptors: Foreign Countries, High School Students, Grade 11, Physics
Peer reviewed
PDF on ERIC (full text available)
Albacete, Patricia; Silliman, Scott; Jordan, Pamela – Grantee Submission, 2017
Intelligent tutoring systems (ITS), like human tutors, try to adapt to the student's knowledge level so that instruction is tailored to their needs. One aspect of this adaptation relies on understanding the student's initial knowledge so as to build on it, avoiding teaching what the student already knows and focusing on…
Descriptors: Intelligent Tutoring Systems, Knowledge Level, Multiple Choice Tests, Computer Assisted Testing
Çetinavci, Ugur Recep; Öztürk, Ismet – Online Submission, 2017
Pragmatic competence is among the explicitly acknowledged sub-competences that make the communicative competence in any language (Bachman & Palmer, 1996; Council of Europe, 2001). Within the notion of pragmatic competence itself, "implicature (implied meanings)" comes to the fore as one of the five main areas there (Levinson, 1983).…
Descriptors: Test Construction, Computer Assisted Testing, Communicative Competence (Languages), Second Language Instruction
Zoanetti, Nathan; Les, Magdalena; Leigh-Lancaster, David – Mathematics Education Research Group of Australasia, 2014
From 2011 to 2013, the VCAA conducted a trial aligning the use of computers in curriculum, pedagogy, and assessment, culminating in a group of 62 volunteer students sitting their end-of-Year-12 technology-active Mathematical Methods (CAS) Examination 2 as a computer-based examination. This paper reports on statistical modelling undertaken to compare the…
Descriptors: Computer Assisted Testing, Comparative Analysis, Mathematical Concepts, Mathematics Tests
Peer reviewed
Direct link
Wan, Lei; Henly, George A. – Applied Measurement in Education, 2012
Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats--the figural response (FR) and constructed response (CR) formats used in a K-12 computerized…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Measurement
Peer reviewed
Direct link
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Peer reviewed
Direct link
Springer, Robert; Pugalee, David; Algozzine, Bob – Clearing House: A Journal of Educational Strategies, Issues and Ideas, 2007
In U.S. schools, students must pass statewide competency tests to graduate from high school. In this article, the authors summarize the development and testing of a program implemented to improve the skills of students failing to "make the grade" on these high-stakes tests. District personnel randomly assigned twenty-eight students who…
Descriptors: Mathematics Tests, High Stakes Tests, Mathematics Skills, Skill Development
Peer reviewed
Direct link
Yun, Seongchul; Miller, Paul Chamness; Baek, Youngkyun; Jung, Jaeyeob; Ko, Myunghwan – Educational Technology & Society, 2008
The purpose of this study is to investigate the effectiveness of response modes by item and feedback type in a web-based language learning program. The subjects of this study, 122 Korean tenth graders learning English as a foreign language, were placed into groups of four and were given a web-based language learning program consisting of two…
Descriptors: Feedback (Response), Test Format, Multiple Choice Tests, Second Language Learning
Peer reviewed
PDF on ERIC (full text available)
Handwerk, Phil – ETS Research Report Series, 2007
Online high schools are growing significantly in number, popularity, and function. However, little empirical data has been published about the effectiveness of these institutions. This research examined the frequency of group work and extended essay writing among online Advanced Placement Program® (AP®) students, and how these tasks may have…
Descriptors: Advanced Placement Programs, Advanced Placement, Computer Assisted Testing, Models