Mislevy, Robert J.; Behrens, John T.; Bennett, Randy E.; Demark, Sarah F.; Frezzo, Dennis C.; Levy, Roy; Robinson, Daniel H.; Rutstein, Daisy Wise; Shute, Valerie J.; Stanley, Ken; Winters, Fielding I. – Journal of Technology, Learning, and Assessment, 2010
People use external knowledge representations (KRs) to identify, depict, transform, store, share, and archive information. Learning how to work with KRs is central to becoming proficient in virtually every discipline. As such, KRs play central roles in curriculum, instruction, and assessment. We describe five key roles of KRs in assessment: (1)…
Descriptors: Student Evaluation, Educational Technology, Computer Networks, Knowledge Representation
Almond, Patricia; Winter, Phoebe; Cameto, Renee; Russell, Michael; Sato, Edynn; Clarke-Midura, Jody; Torres, Chloe; Haertel, Geneva; Dolan, Robert; Beddow, Peter; Lazarus, Sheryl – Journal of Technology, Learning, and Assessment, 2010
This paper represents one outcome from the "Invitational Research Symposium on Technology-Enabled and Universally Designed Assessments," which examined technology-enabled assessments (TEA) and universal design (UD) as they relate to students with disabilities (SWD). It was developed to stimulate research into TEAs designed to make tests…
Descriptors: Disabilities, Inferences, Computer Assisted Testing, Alternative Assessment
Bennett, Randy Elliot; Persky, Hilary; Weiss, Andy; Jenkins, Frank – Journal of Technology, Learning, and Assessment, 2010
This paper describes a study intended to demonstrate how an emerging skill, problem solving with technology, might be measured in the National Assessment of Educational Progress (NAEP). Two computer-delivered assessment scenarios were designed, one on solving science-related problems through electronic information search and the other on solving…
Descriptors: National Competency Tests, Problem Solving, Technology Uses in Education, Computer Assisted Testing
Bechard, Sue; Sheinker, Jan; Abell, Rosemary; Barton, Karen; Burling, Kelly; Camacho, Christopher; Cameto, Renee; Haertel, Geneva; Hansen, Eric; Johnstone, Chris; Kingston, Neal; Murray, Elizabeth; Parker, Caroline E.; Redfield, Doris; Tucker, Bill – Journal of Technology, Learning, and Assessment, 2010
This article represents one outcome from the "Invitational Research Symposium on Technology-Enabled and Universally Designed Assessments," which examined technology-enabled assessments (TEA) and universal design (UD) as they relate to students with disabilities (SWD). It was developed to stimulate research into TEAs designed to better understand…
Descriptors: Test Validity, Disabilities, Educational Change, Evaluation Methods
Scalise, Kathleen; Gifford, Bernard – Journal of Technology, Learning, and Assessment, 2006
Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting, and real-time feedback mechanisms. One potential limitation for realizing the benefits of computer-based assessment in both instructional assessment and large-scale testing comes in…
Descriptors: Electronic Learning, Educational Assessment, Information Technology, Classification
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system's performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differential item functioning (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question-answering strategies. In this project, 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth-grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness