Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 7 |
Source
Journal of Technology, Learning, and Assessment | 12 |
Author
Allen, Nancy | 1 |
Attali, Yigal | 1 |
Banerjee, Manju | 1 |
Bebell, Damian | 1 |
Bennett, Randy Elliott | 1 |
Burstein, Jill | 1 |
Cassady, Jerrell C. | 1 |
Chun, Euljung | 1 |
Dikli, Semire | 1 |
Dolan, Robert P. | 1 |
Drake, Samuel | 1 |
Publication Type
Journal Articles | 12 |
Reports - Research | 6 |
Reports - Descriptive | 3 |
Reports - Evaluative | 3 |
Education Level
Elementary Secondary Education | 9 |
Elementary Education | 5 |
Higher Education | 5 |
Postsecondary Education | 5 |
Grade 3 | 1 |
Grade 4 | 1 |
Grade 7 | 1 |
Grade 8 | 1 |
High Schools | 1 |
Middle Schools | 1 |
Secondary Education | 1 |
Location
Kansas | 1 |
Massachusetts | 1 |
Vermont | 1 |
Laws, Policies, & Programs
Individuals with Disabilities… | 1 |
Assessments and Surveys
Graduate Record Examinations | 1 |
Massachusetts Comprehensive… | 1 |
Bebell, Damian; Kay, Rachel – Journal of Technology, Learning, and Assessment, 2010
This paper examines the educational impacts of the Berkshire Wireless Learning Initiative (BWLI), a pilot program that provided 1:1 technology access to all students and teachers across five public and private middle schools in western Massachusetts. Using a pre/post comparative study design, the current study explores a wide range of program…
Descriptors: Research Design, Middle Schools, Pilot Projects, Research Methodology
Scalise, Kathleen; Gifford, Bernard – Journal of Technology, Learning, and Assessment, 2006
Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting and real-time feedback mechanisms. One potential limitation for realizing the benefits of computer-based assessment in both instructional assessment and large scale testing comes in…
Descriptors: Electronic Learning, Educational Assessment, Information Technology, Classification
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
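The entry above describes a study of mode-related DIF but does not spell out the analysis. As background only, a common way to screen a single item for DIF across delivery media is the Mantel-Haenszel procedure: stratify examinees by total score, form a 2x2 (mode by correct/incorrect) table in each stratum, and pool the odds ratios. The sketch below is a generic illustration under those assumptions, not the study's actual method or data.

```python
# Minimal Mantel-Haenszel DIF sketch for one item (illustrative only).
# Group labels, score bands, and data layout are assumptions, not the study's.
from collections import defaultdict

def mantel_haenszel_dif(responses):
    """responses: iterable of (group, stratum, correct) tuples, where group is
    'paper' (reference) or 'computer' (focal), stratum is a total-score band,
    and correct is 1/0 for the studied item."""
    tables = defaultdict(lambda: [[0, 0], [0, 0]])   # stratum -> 2x2 table
    for group, stratum, correct in responses:
        row = 0 if group == "paper" else 1           # reference vs. focal row
        col = 0 if correct else 1                    # right vs. wrong column
        tables[stratum][row][col] += 1

    num = den = 0.0
    for (a, b), (c, d) in tables.values():
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n   # reference-right * focal-wrong
        den += b * c / n   # reference-wrong * focal-right
    # Common odds ratio; values near 1.0 suggest the item behaves comparably
    # across delivery media once overall proficiency is controlled.
    return num / den if den else float("nan")

responses = [("paper", "low", 1), ("paper", "low", 0),
             ("computer", "low", 1), ("computer", "low", 0),
             ("paper", "high", 1), ("paper", "high", 1),
             ("computer", "high", 1), ("computer", "high", 0)]
print(mantel_haenszel_dif(responses))   # invented data, prints 3.0
```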
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
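As a rough, generic illustration of what "evaluating and scoring written prose" by computer involves at the simplest level, the sketch below extracts a few surface features of the kind the AES literature commonly discusses (length, sentence structure, vocabulary variety). It is a toy under stated assumptions, not a description of any system Dikli reviews.

```python
# Toy surface-feature extraction for an essay (illustrative only; no real AES
# system is implied). Feature names and choices are assumptions.
import re

def essay_features(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

print(essay_features("Automated scoring saves time. It also raises validity questions."))
```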
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
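The abstract highlights that e-rater V.2 rests on a small, meaningful feature set. Purely as an illustration of the general idea, the sketch below combines a handful of features into one reported score via a weighted sum clipped to a 1-6 scale; the feature names, weights, and scale are invented and are not e-rater's actual model.

```python
# Generic weighted-sum scoring sketch (weights and scale are invented, not
# e-rater's). Takes a feature dict like the one produced above.
FEATURE_WEIGHTS = {
    "n_words": 0.002,
    "avg_sentence_len": 0.05,
    "type_token_ratio": 2.0,
}

def combine(features: dict, intercept: float = 1.0, scale=(1, 6)) -> float:
    raw = intercept + sum(w * features[name] for name, w in FEATURE_WEIGHTS.items())
    lo, hi = scale
    return max(lo, min(hi, round(raw, 1)))   # clip to the reporting scale

print(combine({"n_words": 350, "avg_sentence_len": 18.0, "type_token_ratio": 0.55}))  # 3.7
```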
Poggio, John; Glasnapp, Douglas R.; Yang, Xiangdong; Poggio, Andrew J. – Journal of Technology, Learning, and Assessment, 2005
The present study reports results from a quasi-controlled empirical investigation addressing the impact on student test scores when using fixed form computer based testing (CBT) versus paper and pencil (P&P) testing as the delivery mode to assess student mathematics achievement in a state's large scale assessment program. Grade 7 students…
Descriptors: Mathematics Achievement, Measures (Individuals), Program Effectiveness, Measurement
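For readers unfamiliar with how mode-comparability findings of this kind are usually summarized, the sketch below computes group means and a standardized mean difference (Cohen's d with a pooled SD) for two invented sets of scale scores; it is not the study's analysis or data.

```python
# Descriptive CBT vs. paper-and-pencil comparison (all data invented).
from statistics import mean, stdev

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

cbt   = [52, 61, 58, 49, 63, 57, 60, 55]   # hypothetical computer-based scores
paper = [50, 59, 57, 48, 62, 56, 58, 54]   # hypothetical paper-and-pencil scores

print(f"CBT mean {mean(cbt):.1f}, paper mean {mean(paper):.1f}, d = {cohens_d(cbt, paper):+.2f}")
```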
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question answering strategies. In this project 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
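"Facility values" in this context are simply the proportion of examinees answering an item correctly, so comparing them for matched on-line and paper items gives an item-level view of mode effects. The sketch below shows that computation with invented responses; it is background, not the project's analysis.

```python
# Facility value (proportion correct) per item, per mode (invented responses).
def facility(scores):
    """scores: list of 0/1 responses to one item."""
    return sum(scores) / len(scores) if scores else float("nan")

paper_item  = [1, 1, 0, 1, 0, 1, 1, 1]   # hypothetical paper responses
online_item = [1, 0, 0, 1, 0, 1, 1, 0]   # hypothetical on-line responses

print(f"paper facility:   {facility(paper_item):.2f}")    # 0.75
print(f"on-line facility: {facility(online_item):.2f}")   # 0.50
print(f"difference:       {facility(paper_item) - facility(online_item):+.2f}")
```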
Cassady, Jerrell C.; Gridley, Betty E. – Journal of Technology, Learning, and Assessment, 2005
This study analyzed the effects of online formative and summative assessment materials on undergraduates' experiences with attention to learners' testing behaviors (e.g., performance, study habits) and beliefs (e.g., test anxiety, perceived test threat). The results revealed no detriment to students' perceptions of tests or performances on tests…
Descriptors: Study Habits, Student Attitudes, Formative Evaluation, Testing
Higgins, Jennifer; Russell, Michael; Hoffmann, Thomas – Journal of Technology, Learning, and Assessment, 2005
To examine the impact of transitioning 4th grade reading comprehension assessments to the computer, 219 fourth graders were randomly assigned to take a one-hour reading comprehension assessment on paper, on a computer using scrolling text to navigate through passages, or on a computer using paging text to navigate through passages. This study…
Descriptors: Reading Comprehension, Performance Based Assessment, Reading Tests, Grade 4
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness
Ketterlin-Geller, Leanne R. – Journal of Technology, Learning, and Assessment, 2005
Universal design for assessment (UDA) is intended to increase participation of students with disabilities and English-language learners in general education assessments by addressing student needs through customized testing platforms. Computer-based testing provides an optimal format for creating individually-tailored tests. However, although a…
Descriptors: Student Needs, Disabilities, Grade 3, Second Language Learning
Dolan, Robert P.; Hall, Tracey E.; Banerjee, Manju; Chun, Euljung; Strangman, Nicole – Journal of Technology, Learning, and Assessment, 2005
Standards-based reform efforts are highly dependent on accurate assessment of all students, including those with disabilities. The accuracy of current large-scale assessments is undermined by construct-irrelevant factors including access barriers, a particular problem for students with disabilities. Testing accommodations such as the read-aloud…
Descriptors: United States History, Testing Accommodations, Test Content, Learning Disabilities