Publication Date
| Period | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 14 |
| Since 2022 (last 5 years) | 112 |
| Since 2017 (last 10 years) | 254 |
| Since 2007 (last 20 years) | 423 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 632 |
| Scoring | 511 |
| Test Construction | 120 |
| Test Items | 120 |
| Foreign Countries | 115 |
| Evaluation Methods | 106 |
| Automation | 97 |
| Scoring Rubrics | 96 |
| Essays | 90 |
| Student Evaluation | 90 |
| Scores | 89 |
Location
| Location | Count |
| --- | --- |
| Australia | 13 |
| China | 12 |
| New York | 9 |
| Japan | 8 |
| Canada | 7 |
| Netherlands | 7 |
| Germany | 6 |
| Iran | 6 |
| Taiwan | 6 |
| United Kingdom | 6 |
| Spain | 5 |
Nese, Joseph F. T.; Kahn, Josh; Kamata, Akihito – Grantee Submission, 2017
Despite prevalent use and practical application, the current and standard assessment of oral reading fluency (ORF) presents considerable limitations that reduce its validity in estimating growth and monitoring student progress, including: (a) high cost of implementation; (b) tenuous passage equivalence; and (c) bias, large standard error, and…
Descriptors: Automation, Speech, Recognition (Psychology), Scores
Keller-Margulis, Milena A.; Mercer, Sterett H.; Payan, Anita; McGee, Wendy – School Psychology Quarterly, 2015
The purpose of this study was to examine annual growth patterns and gender differences in written expression curriculum-based measurement (WE-CBM) when used in the context of universal screening. Students in second through fifth grade (n = 672) from 2 elementary schools that used WE-CBM as a universal screener participated in the study. Student…
Descriptors: Gender Differences, Curriculum Based Assessment, Elementary School Students, Writing Skills
Bennett, Randy Elliot – Review of Research in Education, 2015
On the surface, this chapter concerns the evolution of educational assessment from a paper-based technology to an electronic one. On a deeper level, that evolution is more substantive. In the first section of this chapter, those stages are briefly described and used to place the new generation of assessments being created by the two comprehensive…
Descriptors: Educational Assessment, Electronic Learning, State Standards, Academic Standards
Newhouse, C. Paul; Tarricone, Pina – Canadian Journal of Learning and Technology, 2014
High-stakes external assessment for practical courses is fraught with problems impacting on the manageability, validity and reliability of scoring. Alternative approaches to assessment using digital technologies have the potential to address these problems. This paper describes a study that investigated the use of these technologies to create and…
Descriptors: High Stakes Tests, Student Evaluation, Evaluation Methods, Scoring
Tarricone, Pina; Newhouse, C. Paul – Australian Educational Researcher, 2016
Traditional moderation of student assessments is often carried out with groups of teachers working face-to-face in a specified location making judgements concerning the quality of representations of achievement. This traditional model has relied little on modern information communications technologies and has been logistically challenging. We…
Descriptors: Visual Arts, Art Education, Art Materials, Alternative Assessment
Wolf, Mikyung Kim; Guzman-Orth, Danielle; Lopez, Alexis; Castellano, Katherine; Himelfarb, Igor; Tsutagawa, Fred S. – Educational Assessment, 2016
This article investigates ways to improve the assessment of English learner students' English language proficiency given the current movement of creating next-generation English language proficiency assessments in the Common Core era. In particular, this article discusses the integration of scaffolding strategies, which are prevalently utilized as…
Descriptors: English Language Learners, Scaffolding (Teaching Technique), Language Tests, Language Proficiency
Thomas, Ally – ProQuest LLC, 2016
With the advent of the newly developed Common Core State Standards and the Next Generation Science Standards, innovative assessments, including technology-enhanced items and tasks, will be needed to meet the challenges of developing valid and reliable assessments in a world of computer-based testing. In a recent critique of the next generation…
Descriptors: Technology Uses in Education, Evaluation Methods, Computer Assisted Testing, Educational Technology
Yu, Guoxing; Zhang, Jing – Language Assessment Quarterly, 2017
In this special issue on high-stakes English language testing in China, the two articles on computer-based testing (Jin & Yan; He & Min) highlight a number of consistent, ongoing challenges and concerns in the development and implementation of the nationwide IB-CET (Internet Based College English Test) and institutional computer-adaptive…
Descriptors: Foreign Countries, Computer Assisted Testing, English (Second Language), Language Tests
Römer, Ute – Language Testing, 2017
This paper aims to connect recent corpus research on phraseology with current language testing practice. It discusses how corpora and corpus-analytic techniques can illuminate central aspects of speech and help in conceptualizing the notion of lexicogrammar in second language speaking assessment. The description of speech and some of its core…
Descriptors: Language Tests, Grammar, English (Second Language), Second Language Learning
Gobert, Janice D.; Sao Pedro, Michael A. – Grantee Submission, 2017
In this chapter, we provide an overview of the design, data-collection, and data-analysis efforts for a digital learning and assessment environment for scientific inquiry / science practices called "Inq-ITS" (Inquiry Intelligent Tutoring System; www.inqits.org). We first present a brief…
Descriptors: Educational Assessment, Electronic Learning, Science Process Skills, Intelligent Tutoring Systems
Buzick, Heather; Oliveri, Maria Elena; Attali, Yigal; Flor, Michael – Applied Measurement in Education, 2016
Automated essay scoring is a developing technology that can provide efficient scoring of large numbers of written responses. Its use in higher education admissions testing provides an opportunity to collect validity and fairness evidence to support current uses and inform its emergence in other areas such as K-12 large-scale assessment. In this…
Descriptors: Essays, Learning Disabilities, Attention Deficit Hyperactivity Disorder, Scoring
Allen, Laura K.; Jacovina, Matthew E.; McNamara, Danielle S. – Grantee Submission, 2016
The development of strong writing skills is a critical (and somewhat obvious) goal within the classroom. Individuals across the world are now expected to reach a high level of writing proficiency to achieve success in both academic settings and the workplace (Geiser & Studley, 2001; Powell, 2009; Sharp, 2007). Unfortunately, strong writing…
Descriptors: Writing Skills, Writing Instruction, Writing Strategies, Teaching Methods
Razi, Salim – SAGE Open, 2015
Similarity reports of plagiarism detectors should be approached with caution as they may not be sufficient to support allegations of plagiarism. This study developed a 50-item rubric to simplify and standardize evaluation of academic papers. In the spring semester of 2011-2012 academic year, 161 freshmen's papers at the English Language Teaching…
Descriptors: Foreign Countries, Scoring Rubrics, Writing Evaluation, Writing (Composition)
Kahn, Josh; Nese, Joseph T.; Alonzo, Julie – Behavioral Research and Teaching, 2016
There is strong theoretical support for oral reading fluency (ORF) as an essential building block of reading proficiency. The current and standard ORF assessment procedure requires that students read aloud a grade-level passage (~250 words) in a one-to-one administration, with the number of words read correctly in 60 seconds constituting their…
Descriptors: Teacher Surveys, Oral Reading, Reading Tests, Computer Assisted Testing
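The standard ORF procedure described in this entry scores a timed one-to-one administration as words read correctly per minute. A minimal sketch of that metric, with a hypothetical `wcpm` helper (not code from the report itself):

```python
# Hypothetical sketch of the standard ORF metric: words correct per
# minute (WCPM) from a timed oral reading of a grade-level passage.

def wcpm(words_attempted: int, errors: int, seconds: float) -> float:
    """Words read correctly, scaled to a 60-second rate."""
    if seconds <= 0:
        raise ValueError("seconds must be positive")
    return (words_attempted - errors) * 60.0 / seconds

# A student who attempts 120 words with 6 errors in exactly one minute:
print(wcpm(words_attempted=120, errors=6, seconds=60.0))  # 114.0
```

Scaling by `60.0 / seconds` keeps the metric comparable when an administration runs slightly over or under one minute.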
Kobayashi, Yuichiro; Abe, Mariko – Journal of Pan-Pacific Association of Applied Linguistics, 2016
The purpose of the present study is to assess second language (L2) spoken English using automated scoring techniques. Automated scoring aims to classify a large set of learners' oral performance data into a small number of discrete oral proficiency levels. In automated scoring, objectively measurable features such as the frequencies of lexical and…
Descriptors: Second Language Learning, Computer Assisted Testing, Scoring, Automation
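This entry describes automated scoring as mapping objectively measurable features of oral performance onto a small set of discrete proficiency levels. A toy illustration of that idea, assuming invented features and levels (this is not the study's model): a nearest-centroid classifier over two hypothetical features, speech rate and type-token ratio.

```python
# Illustrative only: classify feature vectors of oral performance into
# discrete proficiency levels via nearest centroid. Features and levels
# are hypothetical, not taken from Kobayashi & Abe (2016).

from math import dist

# Hypothetical training data: (words per minute, type-token ratio) -> level
training = [
    ((60.0, 0.40), "basic"),
    ((65.0, 0.42), "basic"),
    ((110.0, 0.55), "intermediate"),
    ((115.0, 0.58), "intermediate"),
    ((160.0, 0.72), "advanced"),
    ((155.0, 0.70), "advanced"),
]

def centroids(samples):
    """Mean feature vector for each proficiency level."""
    sums = {}
    for vec, level in samples:
        total, count = sums.setdefault(level, ([0.0] * len(vec), 0))
        sums[level] = ([t + v for t, v in zip(total, vec)], count + 1)
    return {level: tuple(t / n for t in total)
            for level, (total, n) in sums.items()}

def score(vec, cents):
    """Assign the level whose centroid is nearest to the feature vector."""
    return min(cents, key=lambda level: dist(vec, cents[level]))

cents = centroids(training)
print(score((112.0, 0.56), cents))  # prints "intermediate"
```

Real systems replace the toy features with large sets of lexical, grammatical, and acoustic measures and the nearest-centroid rule with a trained statistical classifier, but the input-to-level mapping is the same shape.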

