Showing all 11 results
Peer reviewed
PDF on ERIC
Mislevy, Robert J.; Behrens, John T.; Bennett, Randy E.; Demark, Sarah F.; Frezzo, Dennis C.; Levy, Roy; Robinson, Daniel H.; Rutstein, Daisy Wise; Shute, Valerie J.; Stanley, Ken; Winters, Fielding I. – Journal of Technology, Learning, and Assessment, 2010
People use external knowledge representations (KRs) to identify, depict, transform, store, share, and archive information. Learning how to work with KRs is central to becoming proficient in virtually every discipline. As such, KRs play central roles in curriculum, instruction, and assessment. We describe five key roles of KRs in assessment: (1)…
Descriptors: Student Evaluation, Educational Technology, Computer Networks, Knowledge Representation
Peer reviewed
PDF on ERIC
Almond, Patricia; Winter, Phoebe; Cameto, Renee; Russell, Michael; Sato, Edynn; Clarke-Midura, Jody; Torres, Chloe; Haertel, Geneva; Dolan, Robert; Beddow, Peter; Lazarus, Sheryl – Journal of Technology, Learning, and Assessment, 2010
This paper represents one outcome from the "Invitational Research Symposium on Technology-Enabled and Universally Designed Assessments," which examined technology-enabled assessments (TEA) and universal design (UD) as they relate to students with disabilities (SWD). It was developed to stimulate research into TEAs designed to make tests…
Descriptors: Disabilities, Inferences, Computer Assisted Testing, Alternative Assessment
Peer reviewed
PDF on ERIC
Bennett, Randy Elliot; Persky, Hilary; Weiss, Andy; Jenkins, Frank – Journal of Technology, Learning, and Assessment, 2010
This paper describes a study intended to demonstrate how an emerging skill, problem solving with technology, might be measured in the National Assessment of Educational Progress (NAEP). Two computer-delivered assessment scenarios were designed, one on solving science-related problems through electronic information search and the other on solving…
Descriptors: National Competency Tests, Problem Solving, Technology Uses in Education, Computer Assisted Testing
Peer reviewed
PDF on ERIC
Bechard, Sue; Sheinker, Jan; Abell, Rosemary; Barton, Karen; Burling, Kelly; Camacho, Christopher; Cameto, Renee; Haertel, Geneva; Hansen, Eric; Johnstone, Chris; Kingston, Neal; Murray, Elizabeth; Parker, Caroline E.; Redfield, Doris; Tucker, Bill – Journal of Technology, Learning, and Assessment, 2010
This article represents one outcome from the "Invitational Research Symposium on Technology-Enabled and Universally Designed Assessments," which examined technology-enabled assessments (TEA) and universal design (UD) as they relate to students with disabilities (SWD). It was developed to stimulate research into TEAs designed to better understand…
Descriptors: Test Validity, Disabilities, Educational Change, Evaluation Methods
Peer reviewed
PDF on ERIC
Scalise, Kathleen; Gifford, Bernard – Journal of Technology, Learning, and Assessment, 2006
Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting and real-time feedback mechanisms. One potential limitation for realizing the benefits of computer-based assessment in both instructional assessment and large scale testing comes in…
Descriptors: Electronic Learning, Educational Assessment, Information Technology, Classification
Peer reviewed
PDF on ERIC
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
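Evaluations like the one Rudner, Garcia, and Welch describe typically report how often the automated system's scores match human raters' scores. A minimal sketch of that kind of comparison follows; the score vectors and the 1-6 scale are hypothetical stand-ins, not data or results from the GMAT study.

from statistics import mean, stdev

human   = [4, 5, 3, 6, 4, 2, 5, 4, 3, 6]   # human rater scores (hypothetical, 1-6 scale)
machine = [4, 5, 4, 6, 4, 3, 5, 4, 3, 5]   # automated system scores (hypothetical)

n = len(human)
exact    = sum(h == m for h, m in zip(human, machine)) / n           # identical scores
adjacent = sum(abs(h - m) <= 1 for h, m in zip(human, machine)) / n  # within one point

# Pearson correlation between the two score sets
mh, mm = mean(human), mean(machine)
cov = sum((h - mh) * (m - mm) for h, m in zip(human, machine)) / (n - 1)
r = cov / (stdev(human) * stdev(machine))

print(f"exact agreement:    {exact:.2f}")
print(f"adjacent agreement: {adjacent:.2f}")
print(f"Pearson r:          {r:.2f}")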
Peer reviewed
PDF on ERIC
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
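A study of difficulty differences across delivery media, as in the Gu, Drake, and Wolfe entry above, starts from each item's classical difficulty (proportion correct) in each medium. The sketch below contrasts those proportions and flags large gaps; the correct-response counts and the 0.10 flagging threshold are invented for illustration, and a full DIF analysis would additionally match examinees on ability (for example, with a Mantel-Haenszel procedure).

# Classical difficulty (proportion correct) per item in each delivery medium.
# Correct-response counts are invented (200 examinees per medium assumed).
n_per_group = 200
correct_counts = {
    # item: (computer-based correct, paper-based correct)
    "item 01": (150, 148),
    "item 02": (132, 160),   # easier on paper in this fabricated example
    "item 03": (101, 98),
    "item 04": (175, 149),   # easier on computer in this fabricated example
}

THRESHOLD = 0.10  # arbitrary flagging threshold, for illustration only

for item, (comp, paper) in correct_counts.items():
    p_comp, p_paper = comp / n_per_group, paper / n_per_group
    diff = p_comp - p_paper
    flag = "  <-- flag" if abs(diff) > THRESHOLD else ""
    print(f"{item}: computer={p_comp:.2f}  paper={p_paper:.2f}  diff={diff:+.2f}{flag}")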
Peer reviewed
PDF on ERIC
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Peer reviewed
PDF on ERIC
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
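The entry above describes e-rater V.2 as built on a small, meaningful feature set. One common way such systems combine features is a weighted linear model, and the sketch below illustrates that general idea only; the feature names, weights, and 1-6 clipping are hypothetical and are not e-rater's actual model or parameters.

# Hypothetical feature weights for a linear essay-scoring model (not e-rater's).
FEATURE_WEIGHTS = {
    "grammar_errors_per_100_words": -0.8,
    "usage_errors_per_100_words":   -0.5,
    "organization_score":            1.2,
    "development_score":             1.0,
    "vocabulary_sophistication":     0.6,
    "essay_length_log":              0.9,
}
INTERCEPT = 1.5

def score_essay(features: dict[str, float]) -> float:
    """Weighted sum of features, clipped to a 1-6 score scale."""
    raw = INTERCEPT + sum(FEATURE_WEIGHTS[name] * value
                          for name, value in features.items())
    return max(1.0, min(6.0, raw))

example = {
    "grammar_errors_per_100_words": 0.5,
    "usage_errors_per_100_words":   0.3,
    "organization_score":           1.1,
    "development_score":            0.9,
    "vocabulary_sophistication":    0.7,
    "essay_length_log":             1.4,
}
print(f"predicted score: {score_essay(example):.1f}")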
Peer reviewed
PDF on ERIC
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question answering strategies. In this project 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
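A facility value is simply the proportion of pupils who answer a question correctly, so a mode comparison like Johnson and Green's reduces to comparing that proportion for the on-line and paper versions of each matched question. The counts below are invented for illustration and are not the study's data.

# Facility value = proportion of the 104 pupils answering correctly.
# Correct-response counts per matched question pair are invented.
n_pupils = 104
correct_counts = {
    # matched question: (on-line correct count, paper correct count)
    "Q1": (78, 82),
    "Q2": (65, 60),
    "Q3": (90, 88),
    "Q4": (50, 61),
}

for question, (online, paper) in correct_counts.items():
    fv_online, fv_paper = online / n_pupils, paper / n_pupils
    print(f"{question}: on-line facility={fv_online:.2f}  "
          f"paper facility={fv_paper:.2f}  difference={fv_online - fv_paper:+.2f}")

# Overall facility by mode, averaged across the matched questions
mean_online = sum(o for o, _ in correct_counts.values()) / (n_pupils * len(correct_counts))
mean_paper  = sum(p for _, p in correct_counts.values()) / (n_pupils * len(correct_counts))
print(f"mean facility: on-line={mean_online:.2f}  paper={mean_paper:.2f}")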
Peer reviewed
PDF on ERIC
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness
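Score comparability across paper and computer administrations, as in the Horkay et al. entry above, is often summarized by comparing group means with a standardized effect size. The sketch below computes Cohen's d for two score sets; the numbers are hypothetical and are not NAEP results.

from statistics import mean, stdev
from math import sqrt

# Hypothetical essay scores for a paper-administered and a computer-administered group
paper_scores    = [3.1, 2.8, 3.5, 4.0, 3.3, 2.9, 3.7, 3.2, 3.6, 3.0]
computer_scores = [3.0, 2.7, 3.4, 3.8, 3.1, 2.8, 3.6, 3.3, 3.2, 2.9]

def cohens_d(a: list[float], b: list[float]) -> float:
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                     / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

print(f"paper mean:    {mean(paper_scores):.2f}")
print(f"computer mean: {mean(computer_scores):.2f}")
print(f"Cohen's d:     {cohens_d(paper_scores, computer_scores):+.2f}")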