Showing 1,756 to 1,770 of 7,222 results
Peer reviewed
Direct link
Kuhfeld, Megan; Soland, James – Journal of Research on Educational Effectiveness, 2020
Educational stakeholders have long known that students might not be fully engaged when taking an achievement test and that such disengagement could undermine the inferences drawn from observed scores. Thanks to the growing prevalence of computer-based tests and the new forms of metadata they produce, researchers have developed and validated…
Descriptors: Metadata, Computer Assisted Testing, Achievement Tests, Reaction Time
Peer reviewed
PDF on ERIC
Whitelock, Denise; Edwards, Chris; Okada, Alexandra – Journal of Learning for Development, 2020
The EU-funded TeSLA project -- Adaptive Trust-based e-Assessment System for Learning (http://tesla-project.eu) -- has developed a suite of instruments for e-Authentication. These include face recognition, voice recognition, keystroke dynamics, forensic analysis and plagiarism detection, which were designed for integration within a university's…
Descriptors: Computer Security, Electronic Learning, Student Attitudes, Teacher Attitudes
Council of Chief State School Officers, 2020
Any body of research evolves over time. Previous understandings become more nuanced, ideas are supported or refuted, and eventually we arrive at a clearer view of the issue. The research on score comparability across computerized devices is no exception. CCSSO [Council of Chief State School Officers] and the Center for Assessment have published…
Descriptors: Computer Assisted Testing, Scores, Intermode Differences, Influence of Technology
Goodwin, Amanda P.; Petscher, Yaacov; Jones, Sara; McFadden, Sara; Reynolds, Dan; Lantos, Tess – Grantee Submission, 2020
The authors describe Monster, PI, which is an app-based, gamified assessment that measures language skills (knowledge of morphology, vocabulary, and syntax) of students in grades 5-8 and provides teachers with interpretable score reports to drive instruction that improves vocabulary, reading, and writing ability. Specifically, the authors describe…
Descriptors: Computer Assisted Testing, Handheld Devices, Language Maintenance, Language Tests
New York State Education Department, 2020
The New York State Education Department (NYSED) has a partnership with Questar Assessment Inc. (Questar) for the development of the 2020 Grades 3-8 English Language Arts Tests. Teachers from across the State work with NYSED in a variety of activities to ensure the validity and reliability of the New York State Testing Program (NYSTP). The 2020…
Descriptors: Testing Programs, Language Arts, Language Tests, Computer Assisted Testing
Peer reviewed
Direct link
Ming Ming Chiu; Chi Keung Woo; Alice Shiu; Yun Liu; Bonnie X. Luo – International Journal of Comparative Education and Development, 2020
Purpose: A team member might exert little effort and exploit teammates' work (free riding), which can discourage teammates' efforts. The purpose of this paper is to examine whether free riding devalues team projects and whether an online assessment system for individual scores (OASIS) can reduce student perceptions of free riding and its harmful…
Descriptors: Cooperative Learning, Postsecondary Education, Teamwork, Computer Assisted Testing
Peer reviewed
Direct link
Chen, Yi-Jui I.; Chen, Yi-Hsin; Anthony, Jason L.; Erazo, Noé A. – Journal of Psychoeducational Assessment, 2022
The Computer-based Orthographic Processing Assessment (COPA) is a newly developed assessment to measure orthographic processing skills, including rapid perception, access, differentiation, correction, and arrangement. In this study, cognitive diagnostic models were used to test if the dimensionality of the COPA conforms to theoretical expectation,…
Descriptors: Elementary School Students, Grade 2, Computer Assisted Testing, Orthographic Symbols
Peer reviewed
PDF on ERIC
Sa'di, Rami A.; Sharadgah, Talha A.; Abdulrazzaq, Ahmad; Yaseen, Maha S. – Electronic Journal of e-Learning, 2022
As the COVID-19 pandemic was spreading rapidly throughout the world, the most widespread reaction in many countries to curtail the disease was lockdown. As a result, educational institutions had to find an alternative to face-to-face learning. The most obvious solution was e-learning. Conventional tertiary institutions with little virtual learning…
Descriptors: COVID-19, Pandemics, Postsecondary Education, Electronic Learning
Cooper, Damian – Solution Tree, 2022
Assessment is overdue for a technology-supported reboot, and this practical guide will help you do just that. Within its pages, you'll discover a balanced approach to assessment for learning that includes conversations and performance-based observations as key components. Real-world case studies and differentiated implementation options are…
Descriptors: Evaluation Methods, Student Evaluation, Performance Based Assessment, Learner Engagement
Peer reviewed
PDF on ERIC
Sumer S. Abou Shaaban – International Education Studies, 2022
This paper presents TEFL basic schoolteachers' reflections on the use of Google Classroom amid the COVID-19 pandemic. It is a qualitative descriptive field study of 82 TEFL teachers who responded to a reflection instrument comprising four questions addressing: (1) general demographic information, (2) uses of Google Classroom, (3)…
Descriptors: Language Teachers, English (Second Language), Second Language Instruction, COVID-19
Peer reviewed
PDF on ERIC
Uzun, Kutay – Contemporary Educational Technology, 2018
Managing classroom assessment in crowded classes is a difficult task because of the time that must be devoted to providing feedback on student work. In this respect, the present study aimed to develop an automated essay scoring environment as a potential means to overcome this problem. Secondarily, the study aimed to test…
Descriptors: Computer Assisted Testing, Essays, Scoring, English Literature
Peer reviewed
Direct link
O'Leary, Michael; Scully, Darina; Karakolidis, Anastasios; Pitsia, Vasiliki – European Journal of Education, 2018
The role of digital technology in assessment has received a great deal of attention in recent years. Naturally, technology offers many practical benefits, such as increased efficiency with regard to the design, implementation and scoring of existing assessments. More importantly, it also has the potential to have profound, transformative effects…
Descriptors: Computer Assisted Testing, Educational Technology, Technology Uses in Education, Evaluation Methods
Peer reviewed
Direct link
Passonneau, Rebecca J.; Poddar, Ananya; Gite, Gaurav; Krivokapic, Alisa; Yang, Qian; Perin, Dolores – International Journal of Artificial Intelligence in Education, 2018
Development of reliable rubrics for educational intervention studies that address reading and writing skills is labor-intensive, and could benefit from an automated approach. We compare a main ideas rubric used in a successful writing intervention study to a highly reliable wise-crowd content assessment method developed to evaluate…
Descriptors: Computer Assisted Testing, Writing Evaluation, Content Analysis, Scoring Rubrics
Peer reviewed
PDF on ERIC
Goodwin, Amanda P.; Petscher, Yaacov; Reynolds, Dan; Lantos, Tess; Gould, Sara; Tock, Jamie – Education Sciences, 2018
The history of vocabulary research has specified a rich and complex construct, resulting in calls for vocabulary research, assessment, and instruction to take into account the complex problem space of vocabulary. At the intersection of vocabulary theory and assessment modeling, this paper suggests a suite of modeling techniques that model the…
Descriptors: Factor Analysis, Correlation, Language Tests, Standardized Tests
Peer reviewed
Direct link
Aksu Dunya, Beyza – International Journal of Testing, 2018
This study was conducted to analyze potential item parameter drift (IPD) impact on person ability estimates and classification accuracy when drift affects an examinee subgroup. Using a series of simulations, three factors were manipulated: (a) percentage of IPD items in the CAT exam, (b) percentage of examinees affected by IPD, and (c) item pool…
Descriptors: Adaptive Testing, Classification, Accuracy, Computer Assisted Testing