ERIC collection search results
Source: Journal of Technology, Learning, and Assessment (19 results)
Showing 1 to 15 of 19 results
Peer reviewed
Mislevy, Robert J.; Behrens, John T.; Bennett, Randy E.; Demark, Sarah F.; Frezzo, Dennis C.; Levy, Roy; Robinson, Daniel H.; Rutstein, Daisy Wise; Shute, Valerie J.; Stanley, Ken; Winters, Fielding I. – Journal of Technology, Learning, and Assessment, 2010
People use external knowledge representations (KRs) to identify, depict, transform, store, share, and archive information. Learning how to work with KRs is central to becoming proficient in virtually every discipline. As such, KRs play central roles in curriculum, instruction, and assessment. We describe five key roles of KRs in assessment: (1)…
Descriptors: Student Evaluation, Educational Technology, Computer Networks, Knowledge Representation
Peer reviewed
Almond, Patricia; Winter, Phoebe; Cameto, Renee; Russell, Michael; Sato, Edynn; Clarke-Midura, Jody; Torres, Chloe; Haertel, Geneva; Dolan, Robert; Beddow, Peter; Lazarus, Sheryl – Journal of Technology, Learning, and Assessment, 2010
This paper represents one outcome from the "Invitational Research Symposium on Technology-Enabled and Universally Designed Assessments," which examined technology-enabled assessments (TEA) and universal design (UD) as they relate to students with disabilities (SWD). It was developed to stimulate research into TEAs designed to make tests…
Descriptors: Disabilities, Inferences, Computer Assisted Testing, Alternative Assessment
Peer reviewed
Bechard, Sue; Sheinker, Jan; Abell, Rosemary; Barton, Karen; Burling, Kelly; Camacho, Christopher; Cameto, Renee; Haertel, Geneva; Hansen, Eric; Johnstone, Chris; Kingston, Neal; Murray, Elizabeth; Parker, Caroline E.; Redfield, Doris; Tucker, Bill – Journal of Technology, Learning, and Assessment, 2010
This article represents one outcome from the "Invitational Research Symposium on Technology-Enabled and Universally Designed Assessments," which examined technology-enabled assessments (TEA) and universal design (UD) as they relate to students with disabilities (SWD). It was developed to stimulate research into TEAs designed to better understand…
Descriptors: Test Validity, Disabilities, Educational Change, Evaluation Methods
Georgiadou, Elissavet; Triantafillou, Evangelos; Economides, Anastasios A. – Journal of Technology, Learning, and Assessment, 2007
Since researchers acknowledged the many advantages of computerized adaptive testing (CAT) over traditional linear test administration, the issue of item exposure control has received increased attention. Due to CAT's underlying philosophy, particular items in the item pool may be presented too often and become overexposed, while other items are…
Descriptors: Adaptive Testing, Computer Assisted Testing, Scoring, Test Items
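
The overexposure problem described above arises because maximum-information selection is deterministic: the same highly discriminating items win at every ability level. One widely cited remedy, Kingsbury and Zara's randomesque selection, is sketched below in Python as a rough illustration; it is not necessarily among the procedures the article reviews, and the 2PL item pool here is invented.

import math
import random

def item_information(theta, a, b):
    # Fisher information of a 2PL item at ability theta.
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def randomesque_select(theta, pool, administered, k=5):
    # Pick randomly among the k most informative unused items;
    # randomizing over the top k spreads exposure across the pool
    # instead of always administering the single best item.
    candidates = [i for i in pool if i not in administered]
    candidates.sort(
        key=lambda i: item_information(theta, pool[i]["a"], pool[i]["b"]),
        reverse=True,
    )
    return random.choice(candidates[:k])

# Invented 2PL item pool: discrimination a, difficulty b.
pool = {i: {"a": random.uniform(0.8, 2.0), "b": random.uniform(-2.0, 2.0)}
        for i in range(100)}
print(randomesque_select(theta=0.3, pool=pool, administered={4, 17}))

Raising k trades a little measurement precision for flatter exposure rates across the pool.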
Peer reviewed
Bebell, Damian; Kay, Rachel – Journal of Technology, Learning, and Assessment, 2010
This paper examines the educational impacts of the Berkshire Wireless Learning Initiative (BWLI), a pilot program that provided 1:1 technology access to all students and teachers across five public and private middle schools in western Massachusetts. Using a pre/post comparative study design, the current study explores a wide range of program…
Descriptors: Research Design, Middle Schools, Pilot Projects, Research Methodology
Peer reviewed
Scalise, Kathleen; Gifford, Bernard – Journal of Technology, Learning, and Assessment, 2006
Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting and real-time feedback mechanisms. One potential limitation for realizing the benefits of computer-based assessment in both instructional assessment and large scale testing comes in…
Descriptors: Electronic Learning, Educational Assessment, Information Technology, Classification
Peer reviewed
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
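
Comparisons like the one above are usually summarized with exact agreement, adjacent (within one point) agreement, and a correlation between machine and human scores. The following is a minimal sketch of those statistics with invented scores on a 0-6 scale; it illustrates the general evaluation approach, not the report's actual analysis.

from statistics import correlation  # requires Python 3.10+

def agreement_stats(machine, human):
    # Exact agreement, adjacent agreement, and Pearson correlation
    # between two parallel lists of integer essay scores.
    n = len(machine)
    exact = sum(m == h for m, h in zip(machine, human)) / n
    adjacent = sum(abs(m - h) <= 1 for m, h in zip(machine, human)) / n
    return exact, adjacent, correlation(machine, human)

# Invented machine and human scores for eight essays.
machine = [4, 5, 3, 6, 2, 4, 5, 3]
human   = [4, 4, 3, 6, 3, 4, 5, 2]
print(agreement_stats(machine, human))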
Peer reviewed
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
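
For readers unfamiliar with DIF analysis, the textbook Mantel-Haenszel procedure matches examinees on total score and compares the odds of a correct response between the two delivery media within each score stratum. The sketch below assumes dichotomous items and invented data; the study's own method may differ.

import math
from collections import defaultdict

def mantel_haenszel_dif(scores, correct, group):
    # Mantel-Haenszel common odds ratio for one dichotomous item.
    # scores:  total test score per examinee (matching variable)
    # correct: 1/0 item response per examinee
    # group:   "ref" (e.g., paper) or "foc" (e.g., computer)
    strata = defaultdict(lambda: [0, 0, 0, 0])  # [A, B, C, D] per stratum
    for s, c, g in zip(scores, correct, group):
        cell = strata[s]
        if g == "ref":
            cell[0 if c else 1] += 1   # A: ref correct, B: ref incorrect
        else:
            cell[2 if c else 3] += 1   # C: foc correct, D: foc incorrect
    num = den = 0.0
    for a, b, c, d in strata.values():
        n = a + b + c + d
        if n:
            num += a * d / n
            den += b * c / n
    alpha = num / den                       # common odds ratio
    return alpha, -2.35 * math.log(alpha)   # ETS delta scale

# Invented data: two score strata, six examinees per stratum.
scores  = [10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11]
correct = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0]
group   = ["ref", "ref", "ref", "foc", "foc", "foc",
           "ref", "ref", "ref", "foc", "foc", "foc"]
print(mantel_haenszel_dif(scores, correct, group))

An odds ratio above 1 means the reference medium's examinees found the item easier than matched examinees in the other medium; the delta rescaling is the conventional ETS metric for flagging the size of the effect.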
Peer reviewed
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Peer reviewed
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
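
The "small, intuitive, and meaningful set of features" points at the general architecture of feature-based essay scoring: extract a few interpretable measures, standardize them, and combine them with weights fit against human scores. The toy sketch below uses invented feature names, values, and data; it is not e-rater's actual feature set or model.

import numpy as np

# Invented essay features: each row is one essay, each column an
# interpretable measure (e.g., length, error rate, vocabulary level).
features = np.array([
    [300, 0.02, 0.61],
    [150, 0.08, 0.40],
    [450, 0.01, 0.75],
    [220, 0.05, 0.55],
])
human_scores = np.array([4.0, 2.0, 6.0, 3.0])

# Standardize features, then fit weights by least squares against
# human scores, the usual way a linear scoring model is trained.
z = (features - features.mean(axis=0)) / features.std(axis=0)
X = np.column_stack([np.ones(len(z)), z])        # intercept + features
weights, *_ = np.linalg.lstsq(X, human_scores, rcond=None)

# Score a new essay with the fitted model.
new = (np.array([280, 0.03, 0.58]) - features.mean(axis=0)) / features.std(axis=0)
print(weights[0] + new @ weights[1:])

Keeping the feature set small and interpretable, as the abstract emphasizes, makes the fitted weights themselves meaningful to score users.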
Peer reviewed
Poggio, John; Glasnapp, Douglas R.; Yang, Xiangdong; Poggio, Andrew J. – Journal of Technology, Learning, and Assessment, 2005
The present study reports results from a quasi-controlled empirical investigation addressing the impact on student test scores when using fixed form computer based testing (CBT) versus paper and pencil (P&P) testing as the delivery mode to assess student mathematics achievement in a state's large scale assessment program. Grade 7 students…
Descriptors: Mathematics Achievement, Measures (Individuals), Program Effectiveness, Measurement
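
Mode-effect studies of this kind ultimately compare the score distributions of the two delivery groups. As a generic illustration (not the study's actual analysis), the sketch below computes Welch's t statistic and Cohen's d for two invented groups of scale scores.

from statistics import mean, stdev

def welch_t_and_d(x, y):
    # Welch's t statistic and Cohen's d for two independent groups.
    mx, my = mean(x), mean(y)
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    t = (mx - my) / ((vx / len(x) + vy / len(y)) ** 0.5)
    pooled_sd = (((len(x) - 1) * vx + (len(y) - 1) * vy)
                 / (len(x) + len(y) - 2)) ** 0.5
    return t, (mx - my) / pooled_sd

# Invented scale scores for computer-based vs. paper-and-pencil groups.
cbt = [512, 498, 530, 505, 521, 509]
pp  = [508, 495, 525, 500, 515, 503]
print(welch_t_and_d(cbt, pp))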
Peer reviewed
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question answering strategies. In this project 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
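
A facility value is simply the proportion of pupils answering an item correctly, so the mode comparison reduces to computing that proportion per item in each medium. A minimal sketch with invented 1/0 response matrices:

def facility_values(responses):
    # Proportion correct per item; rows are pupils, columns are items.
    n = len(responses)
    return [sum(row[j] for row in responses) / n
            for j in range(len(responses[0]))]

# Invented responses to the same three items on-screen and on paper.
online = [[1, 0, 1], [1, 1, 0], [0, 1, 1], [1, 1, 1]]
paper  = [[1, 1, 1], [1, 0, 1], [1, 1, 0], [0, 1, 1]]
for j, (fo, fp) in enumerate(zip(facility_values(online),
                                 facility_values(paper))):
    print(f"item {j}: online {fo:.2f} vs paper {fp:.2f}")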
Peer reviewed
Cassady, Jerrell C.; Gridley, Betty E. – Journal of Technology, Learning, and Assessment, 2005
This study analyzed the effects of online formative and summative assessment materials on undergraduates' experiences with attention to learners' testing behaviors (e.g., performance, study habits) and beliefs (e.g., test anxiety, perceived test threat). The results revealed no detriment to students' perceptions of tests or performances on tests…
Descriptors: Study Habits, Student Attitudes, Formative Evaluation, Testing
Peer reviewed
Chung, Gregory K. W. K.; Shel, Tammy; Kaiser, William J. – Journal of Technology, Learning, and Assessment, 2006
We examined a novel formative assessment and instructional approach with 89 students in three electrical engineering classes in special computer-based discussion sections. The technique involved students individually solving circuit problems online, with their real-time responses observed by the instructor. While exploratory, survey and…
Descriptors: Student Problems, Formative Evaluation, Engineering, Engineering Education
Peer reviewed
Higgins, Jennifer; Russell, Michael; Hoffmann, Thomas – Journal of Technology, Learning, and Assessment, 2005
To examine the impact of transitioning 4th grade reading comprehension assessments to the computer, 219 fourth graders were randomly assigned to take a one-hour reading comprehension assessment on paper, on a computer using scrolling text to navigate through passages, or on a computer using paging text to navigate through passages. This study…
Descriptors: Reading Comprehension, Performance Based Assessment, Reading Tests, Grade 4