Showing 91 to 105 of 632 results
MacMahon, Alyssa Leandra – ProQuest LLC, 2023
Computer-based technological tools can be an efficient and effective way to enhance mathematics classroom activities like formative assessment. Whilst there are a number of theoretical frameworks and standards to help pre- and in-service teachers understand the individual complexities of mathematics pedagogy, technology integration, and formative…
Descriptors: Mathematics Education, Technology Uses in Education, Evaluation Methods, Formative Evaluation
Peer reviewed
PDF on ERIC Download full text
Parisa Aqdas Karimi; Seyyed Kazem Banihashem; Harm J. A. Biemans – International Journal of Technology in Education and Science, 2023
The current paper focuses on teachers' attitudes towards, and experiences with, e-learning tools at two universities in different phases of e-learning implementation. The study population comprises university-level teachers, and a simple random sampling method was used. A total of 45 teachers in bachelor programmes from the Faculty of…
Descriptors: College Faculty, Teacher Attitudes, Electronic Learning, Technology Uses in Education
Peer reviewed
PDF on ERIC Download full text
Chen, Lei; Zechner, Klaus; Yoon, Su-Youn; Evanini, Keelan; Wang, Xinhao; Loukina, Anastassia; Tao, Jidong; Davis, Lawrence; Lee, Chong Min; Ma, Min; Mundkowsky, Robert; Lu, Chi; Leong, Chee Wee; Gyawali, Binod – ETS Research Report Series, 2018
This research report provides an overview of the R&D efforts at Educational Testing Service related to its capability for automated scoring of nonnative spontaneous speech with the "SpeechRater" automated scoring service since its initial version was deployed in 2006. While most aspects of this R&D work have been published in…
Descriptors: Computer Assisted Testing, Scoring, Test Scoring Machines, Speech Tests
Peer reviewed
Direct link
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
Klein, Michael – ProQuest LLC, 2019
The purpose of the current study was to examine the differences between the number and types of administration and scoring errors made by administration method (digital/Q-Interactive vs. paper-and-pencil) on the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V). WISC-V administration and scoring checklists were developed in order to provide an…
Descriptors: Intelligence Tests, Children, Test Format, Computer Assisted Testing
Peer reviewed
PDF on ERIC Download full text
Lu, Chang; Cutumisu, Maria – International Educational Data Mining Society, 2021
Digitalization and automation of test administration, score reporting, and feedback provision have the potential to benefit large-scale and formative assessments. Many studies on automated essay scoring (AES) and feedback generation systems were published in the last decade, but few connected AES and feedback generation within a unified framework.…
Descriptors: Learning Processes, Automation, Computer Assisted Testing, Scoring
Wood, Scott; Yao, Erin; Haisfield, Lisa; Lottridge, Susan – ACT, Inc., 2021
For assessment professionals who are also automated scoring (AS) professionals, there is no single set of standards of best practice. This paper reviews the assessment and AS literature to identify key standards of best practice and ethical behavior for AS professionals and codifies those standards in a single resource. Having a unified set of AS…
Descriptors: Standards, Best Practices, Computer Assisted Testing, Scoring
Peer reviewed
Direct link
Lottridge, Sue; Burkhardt, Amy; Boyer, Michelle – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows…
Descriptors: Computer Assisted Testing, Scoring, Automation, Educational Assessment
Peer reviewed
PDF on ERIC Download full text
Ally, Said; Oreku, George – International Journal of Education and Development using Information and Communication Technology, 2022
The outbreak of the COVID-19 pandemic largely disrupted the continuity of educational delivery. Online learning was the prompt response by educators. However, this comes with a big question about the conduct of assessment. Running examinations in the traditional way carries high security risks and administration costs. A precise mechanism to…
Descriptors: COVID-19, Pandemics, Electronic Learning, Information Systems
Peer reviewed
Direct link
Clements, Douglas H.; Banse, Holland; Sarama, Julie; Tatsuoka, Curtis; Joswick, Candace; Hudyma, Aaron; Van Dine, Douglas W.; Tatsuoka, Kikumi K. – Mathematical Thinking and Learning: An International Journal, 2022
Researchers often develop instruments using correctness scores (and a variety of theories and techniques, such as Item Response Theory) for validation and scoring. Less frequently, observations of children's strategies are incorporated into the design, development, and application of assessments. We conducted individual interviews of 833…
Descriptors: Item Response Theory, Computer Assisted Testing, Test Items, Mathematics Tests
Peer reviewed
Direct link
Yerushalmy, Michal; Olsher, Shai – ZDM: The International Journal on Mathematics Education, 2020
We argue that examples can do more than serve the purpose of illustrating the truth of an existential statement or disconfirming the truth of a universal statement. Our argument is relevant to the use of technology in classroom assessment. A central challenge of computer-assisted assessment is to develop ways of collecting rich and complex data…
Descriptors: Computer Assisted Testing, Student Evaluation, Problem Solving, Thinking Skills
Peer reviewed
Direct link
Selcuk Acar; Denis Dumas; Peter Organisciak; Kelly Berthiaume – Grantee Submission, 2024
Creativity is highly valued in both education and the workforce, but assessing and developing creativity can be difficult without psychometrically robust and affordable tools. The open-ended nature of creativity assessments has made them difficult to score, expensive, often imprecise, and therefore impractical for school- or district-wide use. To…
Descriptors: Thinking Skills, Elementary School Students, Artificial Intelligence, Measurement Techniques
Peer reviewed
Direct link
Pfordresher, Peter Q.; Demorest, Steven M. – Journal of Research in Music Education, 2021
The purpose of this study was to analyze a large sample of volunteers from the general population who were tested with an identical online measure of singing accuracy. A sample of 632 participants completed the Seattle Singing Accuracy Protocol (SSAP), a standardized measure of singing accuracy, available online, that includes a test of pitch…
Descriptors: Correlation, Accuracy, Singing, Computer Assisted Testing
Seth Brown – ProQuest LLC, 2021
Over the past decade, districts throughout the United States have reformed their teacher evaluation systems in order to support teachers in improving their practices and to improve the composition of the teacher workforce. Despite the widespread reform of evaluation systems, little research has examined the impacts these systems have on teacher…
Descriptors: Teacher Evaluation, Teacher Effectiveness, Instructional Effectiveness, Instructional Improvement
Peer reviewed
Direct link
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence-level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
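The sentence-level randomization described in the Myers and Wilson abstract above can be approximated in a few lines of Python. The sketch below is only an assumption about how such a script might look, not the authors' actual code: it splits an essay into sentences with NLTK and produces 30 shuffled versions per essay, as the abstract describes.

import random
import nltk

nltk.download("punkt", quiet=True)  # sentence tokenizer data used by sent_tokenize

def randomize_essay(text, n_versions=30, seed=0):
    """Return n_versions copies of the essay with sentence order shuffled."""
    sentences = nltk.sent_tokenize(text)
    rng = random.Random(seed)  # fixed seed for reproducibility; an assumption, not stated in the paper
    versions = []
    for _ in range(n_versions):
        shuffled = list(sentences)  # copy so each shuffle starts from the original order
        rng.shuffle(shuffled)
        versions.append(" ".join(shuffled))
    return versions

Scoring both the original essays and their shuffled versions with the AWE system would then show how sensitive each scoring trait is to sentence order, which appears to be the logic of the construct-validity check the abstract outlines.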