Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
Wang, Jinhao; Brown, Michelle Stallone – Journal of Technology, Learning, and Assessment, 2007
The current research was conducted to investigate the validity of automated essay scoring (AES) by comparing group mean scores assigned by an AES tool, IntelliMetric[TM], and human raters. Data collection included administering the Texas version of the WritePlacer "Plus" test and obtaining scores assigned by IntelliMetric[TM] and by…
Descriptors: Test Scoring Machines, Scoring, Comparative Testing, Intermode Differences
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differential item functioning (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Cassady, Jerrell C.; Gridley, Betty E. – Journal of Technology, Learning, and Assessment, 2005
This study analyzed the effects of online formative and summative assessment materials on undergraduates' experiences with attention to learners' testing behaviors (e.g., performance, study habits) and beliefs (e.g., test anxiety, perceived test threat). The results revealed no detriment to students' perceptions of tests or performances on tests…
Descriptors: Study Habits, Student Attitudes, Formative Evaluation, Testing
Scalise, Kathleen; Gifford, Bernard – Journal of Technology, Learning, and Assessment, 2006
Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting and real-time feedback mechanisms. One potential limitation for realizing the benefits of computer-based assessment in both instructional assessment and large scale testing comes in…
Descriptors: Electronic Learning, Educational Assessment, Information Technology, Classification
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
Scharber, Cassandra; Dexter, Sara; Riedel, Eric – Journal of Technology, Learning, and Assessment, 2008
The purpose of this research is to analyze preservice teachers' use of and reactions to an automated essay scorer used within an online, case-based learning environment called ETIPS. Data analyzed include post-assignment surveys, a user log of students' actions within the cases, instructor-assigned scores on final essays, and interviews with four…
Descriptors: Test Scoring Machines, Essays, Student Experience, Automation
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Chung, Gregory K. W. K.; Shel, Tammy; Kaiser, William J. – Journal of Technology, Learning, and Assessment, 2006
We examined a novel formative assessment and instructional approach with 89 students in three electrical engineering classes in special computer-based discussion sections. The technique involved students individually solving circuit problems online, with their real-time responses observed by the instructor. While exploratory, survey and…
Descriptors: Student Problems, Formative Evaluation, Engineering, Engineering Education