Publication Date
  In 2025: 125
  Since 2024: 451
  Since 2021 (last 5 years): 1624
  Since 2016 (last 10 years): 2952
  Since 2006 (last 20 years): 4851
Audience
  Practitioners: 181
  Researchers: 145
  Teachers: 120
  Policymakers: 37
  Administrators: 36
  Students: 15
  Counselors: 9
  Parents: 4
  Media Staff: 3
  Support Staff: 3
Location
  Australia: 166
  United Kingdom: 152
  Turkey: 124
  China: 114
  Germany: 107
  Canada: 105
  Spain: 91
  Taiwan: 88
  Netherlands: 72
  Iran: 68
  United States: 67
What Works Clearinghouse Rating
  Meets WWC Standards without Reservations: 4
  Meets WWC Standards with or without Reservations: 4
  Does not meet standards: 5
Zeng, Ji; Yin, Ping; Shedden, Kerby A. – Educational and Psychological Measurement, 2015
This article provides a brief overview and comparison of three matching approaches in forming comparable groups for a study comparing test administration modes (i.e., computer-based tests [CBT] and paper-and-pencil tests [PPT]): (a) a propensity score matching approach proposed in this article, (b) the propensity score matching approach used by…
Descriptors: Comparative Analysis, Computer Assisted Testing, Probability, Classification
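For readers unfamiliar with the matching approaches this entry compares, the sketch below illustrates generic propensity score matching for forming comparable CBT and PPT groups; it is not the article's specific procedure, and the covariates, simulated data, and scikit-learn usage are assumptions for illustration only.

```python
# Minimal sketch (not the article's method): propensity score matching to form
# comparable CBT and PPT groups before comparing test administration modes.
# Data are simulated; covariates are hypothetical (e.g., prior achievement, age).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))  # hypothetical covariates
# Mode assignment (1 = CBT, 0 = PPT) depends on covariates, so raw groups differ
mode = (rng.random(n) < 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))).astype(int)

# 1. Estimate propensity scores: P(mode = CBT | covariates)
ps = LogisticRegression().fit(X, mode).predict_proba(X)[:, 1]

# 2. Greedy 1:1 nearest-neighbor matching on the propensity score
treated = np.flatnonzero(mode == 1)
control = np.flatnonzero(mode == 0)
available = set(control)
pairs = []
for t in treated:
    if not available:
        break
    c = min(available, key=lambda j: abs(ps[t] - ps[j]))
    pairs.append((t, c))
    available.remove(c)

print(f"Matched {len(pairs)} CBT/PPT pairs out of {len(treated)} CBT examinees")
```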
Haug, Tobias – Deafness and Education International, 2015
Sign language test development is a relatively new field within sign linguistics, motivated by the practical need for assessment instruments to evaluate language development in different groups of learners (L1, L2). Due to the lack of research on the structure and acquisition of many sign languages, developing an assessment instrument poses…
Descriptors: Sign Language, Information Technology, Language Tests, Online Surveys
Jayashankar, Shailaja; Sridaran, R. – Education and Information Technologies, 2017
Teachers face an abundance of free-text answers that are daunting to read and evaluate. Automatic assessment of open-ended answers has been attempted in the past, but none guarantees 100% accuracy. To deal with the overload involved in this manual evaluation, a new tool becomes necessary. The unique superlative model…
Descriptors: Word Frequency, Models, Electronic Learning, Student Evaluation
Ockey, Gary J.; Gu, Lin; Keehner, Madeleine – Language Assessment Quarterly, 2017
A growing number of stakeholders argue for the use of second language (L2) speaking assessments that measure the ability to orally communicate in real time. A Web-based virtual environment (VE) that allows live voice communication among individuals may have potential for aiding in delivering such assessments. While off-the-shelf voice…
Descriptors: Speech Communication, Language Tests, Computer Assisted Testing, Computer Simulation
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
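The very short response times this entry refers to are commonly handled by flagging responses below an item-level time threshold as rapid guesses. The sketch below illustrates that general threshold-based approach; the threshold rule, simulated response times, and summary statistic are assumptions for illustration, not the article's procedure.

```python
# Minimal sketch (assumed rule, not the article's procedure): flag multiple-choice
# responses as rapid guesses when response time falls below an item threshold,
# then summarize each examinee's share of non-guess responses.
import numpy as np

rng = np.random.default_rng(1)
n_examinees, n_items = 200, 40
# Simulated response times in seconds, with a few rapid guesses mixed in
rt = rng.lognormal(mean=3.0, sigma=0.5, size=(n_examinees, n_items))
guess_mask = rng.random((n_examinees, n_items)) < 0.05
rt[guess_mask] = rng.uniform(0.5, 2.0, size=guess_mask.sum())

# Hypothetical threshold rule: 10% of each item's mean response time,
# kept within a 1- to 3-second band
thresholds = np.clip(0.10 * rt.mean(axis=0), 1.0, 3.0)

rapid_guess = rt < thresholds          # True where a response looks like a rapid guess
effort = 1.0 - rapid_guess.mean(axis=1)  # share of effortful responses per examinee

print(f"Flagged {rapid_guess.mean():.1%} of responses as rapid guesses")
print(f"Examinees with effort below 0.90: {(effort < 0.90).sum()}")
```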
Zimmermann, Laura; Moser, Alecia; Lee, Herietta; Gerhardstein, Peter; Barr, Rachel – Child Development, 2017
This study examined the effect of a "ghost" demonstration on toddlers' imitation. In the "ghost" condition, virtual pieces moved to make a fish or boat puzzle. Fifty-two 2.5- and 3-year-olds were tested on a touchscreen (no transfer) or with 3D pieces (transfer); children tested with 3D pieces scored above a no demonstration…
Descriptors: Toddlers, Imitation, Computer Assisted Testing, Performance
Bennett, Sue; Dawson, Phillip; Bearman, Margaret; Molloy, Elizabeth; Boud, David – British Journal of Educational Technology, 2017
A wide range of technologies has been developed to enhance assessment, but adoption has been inconsistent. This is despite assessment being critical to student learning and certification. To understand why this is the case and how it can be addressed, we need to explore the perspectives of academics responsible for designing and implementing…
Descriptors: College Faculty, Computer Assisted Testing, Teacher Attitudes, Interviews
Wang, Keyin – ProQuest LLC, 2017
The comparison of item-level computerized adaptive testing (CAT) and multistage adaptive testing (MST) has been researched extensively (e.g., Kim & Plake, 1993; Luecht et al., 1996; Patsula, 1999; Jodoin, 2003; Hambleton & Xing, 2006; Keng, 2008; Zheng, 2012). Various CAT and MST designs have been investigated and compared under the same…
Descriptors: Comparative Analysis, Computer Assisted Testing, Adaptive Testing, Test Items
Domagala, Joseph F. – ProQuest LLC, 2017
Teachers in the classroom need to be able to modify instruction in a meaningful way that helps students to improve academically. School districts provide electronic assessment data which helps teachers identify areas for improvement within their instruction. Electronic performance assessment systems can be an effective means of assisting schools…
Descriptors: Elementary School Teachers, Computer Uses in Education, Educational Assessment, Computer Assisted Testing
Bukhari, Nurliyana – ProQuest LLC, 2017
In general, newer educational assessments pose more demanding challenges than students are currently prepared to face. Two types of factors may contribute to test scores: (1) factors or dimensions that are of primary interest to the construct or test domain; and (2) factors or dimensions that are irrelevant to the construct, causing…
Descriptors: Item Response Theory, Models, Psychometrics, Computer Simulation
Yasuno, Fumiko; Nishimura, Keiichi; Negami, Seiya; Namikawa, Yukihiko – International Journal for Technology in Mathematics Education, 2019
Our study concerns the development of mathematics items for Computer-Based Testing (CBT) using a Tablet PC. These items are subject-based items that use interactive dynamic objects. The purpose of this study is to obtain suggestions for further tasks, drawing on field-test results for the developed items. First, we clarified the role of the interactive dynamic…
Descriptors: Mathematics Instruction, Mathematics Tests, Test Items, Computer Assisted Testing
Tate, Tamara P.; Warschauer, Mark – Technology, Knowledge and Learning, 2019
The quality of students' writing skills continues to concern educators. Because writing is essential to success in both college and career, poor writing can have lifelong consequences. Writing is now primarily done digitally, but students receive limited explicit instruction in digital writing. This lack of instruction means that students fail to…
Descriptors: Writing Tests, Computer Assisted Testing, Writing Skills, Writing Processes
Zawoyski, Andrea; Ardoin, Scott P. – School Psychology Review, 2019
Reading comprehension assessments often include multiple-choice (MC) questions, but some researchers doubt their validity in measuring comprehension. Consequently, new assessments may include more short-answer (SA) questions. The current study contributes to the research comparing MC and SA questions by evaluating the effects of anticipated…
Descriptors: Eye Movements, Elementary School Students, Children, Test Format
Castro, Connie J.; Viezel, Kathleen; Dumont, Ron; Guiney, Meaghan – Journal of Psychoeducational Assessment, 2019
This study examined recent technological developments in cognitive assessment and how these developments impact children's test behavior. The study consisted of two groups: one tested with an iPad and another tested with the standard paper and pencil format of the Wechsler Intelligence Scale for Children (WISC-IV). Independent groups t tests…
Descriptors: Intelligence Tests, Children, Cognitive Ability, Child Behavior
Aouine, Amina; Mahdaoui, Latifa; Moccozet, Laurent – International Journal of Information and Learning Technology, 2019
Purpose: The purpose of this paper is to focus on assessing individuals' problems in learning groups/teams, which should lead to the assessment of the group/team itself as a learning entity. Design/methodology/approach: In this paper, an extension of the IMS-Learning Design (IMS-LD) meta-model is proposed in order to support the assessment of…
Descriptors: Cooperative Learning, Electronic Learning, Scores, Models