Dutta, Atanu Kumar; Goswami, Kalyan; Murugaiyan, Sathishbabu; Sahoo, Sibasish; Pal, Amit; Paul, Chandramallika; Thallapaneni, Sasikala; Biswas, Soham – Biochemistry and Molecular Biology Education, 2020
The coronavirus (COVID-19) pandemic is forcing medical educators to innovate and embrace online education and assessment platforms. One of the most significant challenges we face is the formative assessment of practical skills in undergraduate medical biochemistry education. We have designed the electronic objectively structured…
Descriptors: COVID-19, Pandemics, Medical Education, Electronic Learning
DiCerbo, Kristen – Educational Measurement: Issues and Practice, 2020
We have the ability to capture data from students' interactions with digital environments as they engage in learning activity. This opens the potential for reimagining assessment as something that becomes part of natural educational activity and can be used to support learning. These new data allow us to more closely examine the…
Descriptors: Student Diversity, Information Technology, Learning Activities, Learning Processes
Arslan, Burcu; Jiang, Yang; Keehner, Madeleine; Gong, Tao; Katz, Irvin R.; Yan, Fred – Educational Measurement: Issues and Practice, 2020
Computer-based educational assessments often include items that involve drag-and-drop responses. There are different ways that drag-and-drop items can be laid out and different choices that test developers can make when designing these items. Currently, these decisions are based on experts' professional judgments and design constraints, rather…
Descriptors: Test Items, Computer Assisted Testing, Test Format, Decision Making
Kay, Alison E.; Hardy, Judy; Galloway, Ross K. – British Journal of Educational Technology, 2020
This study explores the relationship between engagement with an online, free-to-use question-generation application (PeerWise) and student achievement. Using PeerWise, students can create and answer multiple-choice questions and can provide feedback to the question authors on question quality. This provides further scope for students to engage in…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Academic Achievement, Feedback (Response)
Friyatmi; Mardapi, Djemari; Haryanto; Rahmi, Elvi – European Journal of Educational Research, 2020
Advances in information technology have changed conventional testing methods. The weaknesses of the paper-based test can be minimized using the computer-based test (CBT). Developing a CBT requires a computerized item bank. This study aimed to develop a computerized item bank for classroom and school-based…
Descriptors: Computer Assisted Testing, Item Banks, High School Students, Foreign Countries
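The abstract above names a computerized item bank as the core component of a CBT but does not describe its schema. As a minimal sketch only, assuming a flat bank of multiple-choice items with a pilot-derived difficulty field (all field names and the form-drawing logic here are illustrative assumptions, not the study's design):

```python
import random
from dataclasses import dataclass

@dataclass
class Item:
    """One multiple-choice item in the bank (fields are illustrative)."""
    item_id: str
    stem: str
    options: list
    answer: int        # index of the keyed option
    difficulty: float  # e.g., proportion-correct from pilot testing

def draw_form(bank, n, seed=None):
    """Sample n distinct items from the bank for one test form."""
    rng = random.Random(seed)
    return rng.sample(bank, n)

# A toy 20-item bank and one 5-item form drawn from it:
bank = [Item(f"Q{i}", f"Stem {i}", ["a", "b", "c", "d"], 0, 0.5)
        for i in range(20)]
form = draw_form(bank, 5, seed=42)
```

A real item bank would typically add content tags and IRT parameters so forms can be assembled to a blueprint rather than drawn uniformly at random.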
Pastor, Dena; Love, Paula – Intersection: A Journal at the Intersection of Assessment and Learning, 2020
For more than 30 years, James Madison University (JMU) has used Assessment Days to collect longitudinal data on student learning outcomes. Our model ensures that all incoming students are tested twice: once in August before beginning classes and again in February after accumulating 45-70 credit hours (Pastor, Foelber, Jacovidis, Fulcher &…
Descriptors: Student Evaluation, COVID-19, Pandemics, Evaluation Methods
Backes, Ben; Cowan, James – National Center for Analysis of Longitudinal Data in Education Research (CALDER), 2020
Prior work has documented a substantial penalty associated with taking the Partnership for Assessment of Readiness for College and Careers (PARCC) online relative to on paper (Backes & Cowan, 2019). However, this penalty does not necessarily make online tests less useful. For example, it could be the case that computer literacy skills are…
Descriptors: Predictive Validity, Test Validity, Computer Assisted Testing, Comparative Analysis
New York State Education Department, 2020
The New York State Education Department (NYSED) has a partnership with Questar Assessment Inc. (Questar) for the development of the 2020 Grades 3-8 Mathematics Tests. Teachers from across the State work with NYSED in a variety of activities to ensure the validity and reliability of the New York State Testing Program (NYSTP). The 2020 Grades 6-8…
Descriptors: Mathematics Tests, Computer Assisted Testing, Scheduling, Testing
New York State Education Department, 2020
The New York State Education Department (NYSED) has a partnership with Questar Assessment Inc. (Questar) for the development of the 2020 Grades 3-8 Mathematics Tests. Teachers from across the State work with NYSED in a variety of activities to ensure the validity and reliability of the New York State Testing Program (NYSTP). The 2020 Grades 3-5…
Descriptors: Mathematics Tests, Computer Assisted Testing, Scheduling, Testing
Priya Harindranathan – ProQuest LLC, 2020
A major problem instructors face after implementing unsupervised online assessments is that they may lack real-time access to students' actual learning behaviors. Limitations in student feedback, limited know-how in accessing and analyzing log data, and large class sizes can restrict instructors' access to learners' behaviors. This…
Descriptors: Tests, Supervision, Student Evaluation, Computer Assisted Testing
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence-level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
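The study above randomized each essay at the sentence level with a script using Python's NLTK module. A minimal sketch of that manipulation, substituting a simple regex split for NLTK's sentence tokenizer so the example is self-contained (the function name and seeding are assumptions, not the study's script):

```python
import random
import re

def shuffle_sentences(essay, seed=None):
    """Return a copy of `essay` with its sentences in random order.

    The study used NLTK's sentence tokenizer; a regex split on
    sentence-final punctuation stands in for it here.
    """
    sentences = [s for s in re.split(r'(?<=[.!?])\s+', essay.strip()) if s]
    rng = random.Random(seed)
    rng.shuffle(sentences)
    return " ".join(sentences)

# One essay permuted several times, echoing the study's 30 randomizations
# per essay; different seeds yield different orderings of the same sentences.
essay = "Dogs are loyal. They guard homes. Many families love them."
permutations = [shuffle_sentences(essay, seed=i) for i in range(3)]
```

Because only sentence order changes, an AWE score that is insensitive to this manipulation would suggest the trait is not measuring discourse-level organization.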
Isaacs, Talia; Hu, Ruolin; Trenkic, Danijela; Varga, Julia – Language Testing, 2023
The COVID-19 pandemic has changed the university admissions and proficiency testing landscape. One change has been the meteoric rise in use of the fully automated Duolingo English Test (DET) for university entrance purposes, offering test-takers a cheaper, shorter, accessible alternative. This rapid response study is the first to investigate the…
Descriptors: Predictive Validity, Educational Technology, Handheld Devices, Language Tests
Coetzee, Stephen A.; Schmulian, Astrid; Janse van Rensburg, Cecile – Accounting Education, 2023
As a result of containment measures implemented during COVID-19, the authors needed to re-envision and restructure in-person assessments for learning that provided immediate peer feedback to students in their competency-based financial reporting course. Peer feedback is crucial in competency-based education, as mastering a competency necessitates…
Descriptors: Computer Mediated Communication, Peer Evaluation, Feedback (Response), Computer Assisted Testing
Siyuan Shao – SAGE Open, 2023
After the outbreak of the COVID-19 pandemic, distance teaching brought unforeseen challenges around the world, including in classroom-based assessment practice. However, little attention has been paid to teachers' assessment practices and their identities as assessors in the new teaching situation. This study examines Chinese K-12 in-service EFL…
Descriptors: Kindergarten, Elementary Secondary Education, Computer Assisted Testing, Foreign Countries
Sangsuwan, Wiramon; Rukthong, Anchana – LEARN Journal: Language Education and Acquisition Research Network, 2023
A direct test of English speaking is important to evaluate what learners can do in real-life situations. However, due to challenges in test administration, especially with a large number of test-takers, a direct speaking test may not be feasible in many contexts and thus indirect tests, such as conversational cloze tests, are mainly used. In…
Descriptors: Language Tests, Speech Communication, Computer Assisted Testing, English (Second Language)