Showing all 8 results
Peer reviewed
Direct link
Gardner, Ronald L.; Stephens, Tammy L. – Preventing School Failure, 2023
Despite knowledge of COVID-19's expected impact on the 2020 and 2021 academic school years, policymakers, professional organizations, and test publishers have failed to offer consistent, well-defined, or corresponding advice to educational evaluators on how to meet the unique challenges the pandemic has introduced. The directive vacuum that was…
Descriptors: Distance Education, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Peer reviewed
Direct link
DiCerbo, Kristen E. – Journal of Applied Testing Technology, 2017
While game-based assessment offers new potential for understanding the processes students use to solve problems, it also presents new challenges in uncovering which player actions provide evidence that contributes to understanding of the knowledge, skills, and attributes we are interested in assessing. A development process that…
Descriptors: Educational Games, Evaluation Methods, Educational Technology, Technology Uses in Education
Peer reviewed
Direct link
Babu, Rakesh; Singh, Rahul – Journal of Information Technology Education: Research, 2013
This paper presents a novel task-oriented, user-centered, multi-method evaluation (TUME) technique and shows how it is useful in providing a more complete, practical, and solution-oriented assessment of the accessibility and usability of Learning Management Systems (LMS) for blind and visually impaired (BVI) students. Novel components of TUME…
Descriptors: Integrated Learning Systems, Blindness, Visual Impairments, Accessibility (for Disabled)
Lekwa, Adam Jens – ProQuest LLC, 2012
This paper reports the results of a descriptive study on the use of a technology-enhanced formative assessment system called Accelerated Math (AM) for English language learners (ELLs) and their native-English-speaking (NES) peers. It comprised analyses of an extant database of 18,549 students, including 2,057 ELLs, from grades 1 through 8 across 30 U.S. states. These…
Descriptors: Formative Evaluation, Computer Assisted Testing, Grade 1, Grade 2
Peer reviewed
PDF on ERIC (full text available)
Regional Educational Laboratory Southeast, 2008
What are other states doing to assess whether K-12 teachers are meeting the technology proficiency standards (NETS) outlined by ISTE (International Society for Technology in Education)? What are alternatives to the IC3 test? This question was answered by contacting state leaders and by searching for information on the Internet. Many states have an…
Descriptors: Educational Technology, Elementary Secondary Education, Technology Integration, Evaluation Methods
Peer reviewed
Direct link
Wang, Jinhao; Brown, Michelle Stallone – Contemporary Issues in Technology and Teacher Education (CITE Journal), 2008
The purpose of the current study was to analyze the relationship between automated essay scoring (AES) and human scoring in order to determine the validity and usefulness of AES for large-scale placement tests. Specifically, a correlational research design was used to examine the correlations between AES performance and human raters' performance.…
Descriptors: Scoring, Essays, Computer Assisted Testing, Sentence Structure
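As a purely illustrative sketch of the correlational design this record describes, the snippet below computes the correlation between machine-assigned and human-assigned scores for the same essays. The score values and variable names are hypothetical and are not drawn from the study's data.

# Minimal sketch of a correlational comparison between automated essay
# scoring (AES) and human rater scores (hypothetical data, for illustration only).
from scipy.stats import pearsonr

# Hypothetical placement-essay scores for the same ten essays.
aes_scores   = [4.0, 3.5, 5.0, 2.5, 4.5, 3.0, 4.0, 5.5, 3.5, 4.5]  # machine (AES)
human_scores = [4.5, 3.0, 5.0, 3.0, 4.0, 3.5, 4.5, 5.0, 3.0, 4.0]  # averaged human ratings

r, p_value = pearsonr(aes_scores, human_scores)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")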
Peer reviewed
Direct link
McPherson, Douglas – Interactive Technology and Smart Education, 2009
Purpose: The purpose of this paper is to describe how and why Texas A&M University at Qatar (TAMUQ) has developed a system aiming to place students effectively in freshman and developmental English programs. The placement system includes triangulating data from external test scores with scores from a panel-marked, hand-written essay (HWE),…
Descriptors: Student Placement, Educational Testing, English (Second Language), Second Language Instruction
Wang, Jinhao; Brown, Michelle Stallone – Journal of Technology, Learning, and Assessment, 2007
The current research was conducted to investigate the validity of automated essay scoring (AES) by comparing group mean scores assigned by an AES tool, IntelliMetric [TM], and by human raters. Data collection included administering the Texas version of the WriterPlacer "Plus" test and obtaining scores assigned by IntelliMetric [TM] and by…
Descriptors: Test Scoring Machines, Scoring, Comparative Testing, Intermode Differences
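As a purely illustrative sketch of the group-mean comparison this record describes, the snippet below contrasts the mean score assigned by an automated scorer with the mean assigned by human raters on the same set of essays, using a paired comparison. The data and names are hypothetical and are not drawn from the study.

# Hypothetical sketch: comparing mean essay scores from an automated scorer
# and from human raters on the same essays (invented data, for illustration only).
from statistics import mean
from scipy.stats import ttest_rel

machine = [5.0, 4.5, 6.0, 3.5, 5.5, 4.0, 5.0, 6.5]   # automated scores
human   = [5.5, 4.0, 6.0, 4.0, 5.0, 4.5, 5.5, 6.0]   # averaged human rater scores

print(f"machine mean = {mean(machine):.2f}, human mean = {mean(human):.2f}")
t, p = ttest_rel(machine, human)                      # paired t-test on the same essays
print(f"paired t = {t:.3f}, p = {p:.4f}")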