Lin, Chuan-Ju – Journal of Technology, Learning, and Assessment, 2010
Assembling equivalent test forms with minimal test overlap across forms is important in ensuring test security. Chen and Lei (2009) suggested an exposure control technique to control test overlap--ordered item pooling--on the fly, based on the premise that the test overlap rate--ordered item pooling for the first t examinees is a function of test overlap…
Descriptors: Test Length, Test Format, Evaluation Criteria, Psychometrics
Mislevy, Robert J.; Behrens, John T.; Bennett, Randy E.; Demark, Sarah F.; Frezzo, Dennis C.; Levy, Roy; Robinson, Daniel H.; Rutstein, Daisy Wise; Shute, Valerie J.; Stanley, Ken; Winters, Fielding I. – Journal of Technology, Learning, and Assessment, 2010
People use external knowledge representations (KRs) to identify, depict, transform, store, share, and archive information. Learning how to work with KRs is central to becoming proficient in virtually every discipline. As such, KRs play central roles in curriculum, instruction, and assessment. We describe five key roles of KRs in assessment: (1)…
Descriptors: Student Evaluation, Educational Technology, Computer Networks, Knowledge Representation
Bennett, Randy Elliot; Persky, Hilary; Weiss, Andy; Jenkins, Frank – Journal of Technology, Learning, and Assessment, 2010
This paper describes a study intended to demonstrate how an emerging skill, problem solving with technology, might be measured in the National Assessment of Educational Progress (NAEP). Two computer-delivered assessment scenarios were designed, one on solving science-related problems through electronic information search and the other on solving…
Descriptors: National Competency Tests, Problem Solving, Technology Uses in Education, Computer Assisted Testing
Higgins, Jennifer; Patterson, Margaret Becker; Bozman, Martha; Katz, Michael – Journal of Technology, Learning, and Assessment, 2010
This study examined the feasibility of administering GED Tests using a computer based testing system with embedded accessibility tools and the impact on test scores and test-taker experience when GED Tests are transitioned from paper to computer. Nineteen test centers across five states successfully installed the computer based testing program,…
Descriptors: Testing Programs, Testing, Computer Uses in Education, Mathematics Tests
Lin, Chuan-Ju – Journal of Technology, Learning, and Assessment, 2008
The automated assembly of alternate test forms for online delivery provides an alternative to computer-administered, fixed test forms, or computerized-adaptive tests when a testing program migrates from paper/pencil testing to computer-based testing. The weighted deviations model (WDM) heuristic is particularly promising for automated test assembly…
Descriptors: Item Response Theory, Test Theory, Comparative Analysis, Computer Assisted Testing
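The WDM heuristic builds a form item by item, at each step choosing the item whose addition most reduces a weighted sum of deviations from content targets. The following is a heavily simplified greedy sketch, not the published WDM algorithm: it assumes one content attribute per item, and every name and data value is hypothetical.

```python
def wdm_assemble(pool, targets, weights, test_length):
    """Greedy weighted-deviations form assembly (illustrative sketch).

    pool:        dict of item_id -> content area
    targets:     dict of content area -> desired item count
    weights:     dict of content area -> penalty weight
    test_length: number of items to select
    """
    selected = []
    counts = {area: 0 for area in targets}

    def deviation(c):
        # Weighted sum of absolute deviations from the content targets.
        return sum(weights[a] * abs(c[a] - targets[a]) for a in targets)

    for _ in range(test_length):
        # Pick the unselected item whose addition minimizes the deviation.
        best = min(
            (i for i in pool if i not in selected),
            key=lambda i: deviation({**counts, pool[i]: counts[pool[i]] + 1}),
        )
        selected.append(best)
        counts[pool[best]] += 1
    return selected

# Hypothetical 6-item pool with two content areas and a 2/2 blueprint.
pool = {101: "algebra", 102: "algebra", 103: "algebra",
        104: "geometry", 105: "geometry", 106: "geometry"}
form = wdm_assemble(pool, {"algebra": 2, "geometry": 2},
                    {"algebra": 1.0, "geometry": 1.0}, test_length=4)
```

A production assembler would also weigh statistical targets (e.g., information at cutscores) and overlap constraints alongside content counts.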
Bennett, Randy Elliot; Braswell, James; Oranje, Andreas; Sandene, Brent; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2008
This article describes selected results from the Math Online (MOL) study, one of three field investigations sponsored by the National Center for Education Statistics (NCES) to explore the use of new technology in NAEP. Of particular interest in the MOL study was the comparability of scores from paper- and computer-based tests. A nationally…
Descriptors: National Competency Tests, Familiarity, Computer Assisted Testing, Mathematics Tests
Pommerich, Mary – Journal of Technology, Learning, and Assessment, 2007
Computer administered tests are becoming increasingly prevalent as computer technology becomes more readily available on a large scale. For testing programs that utilize both computer and paper administrations, mode effects are problematic in that they can result in examinee scores that are artificially inflated or deflated. As such, researchers…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Scores
Walt, Nancy; Atwood, Kristin; Mann, Alex – Journal of Technology, Learning, and Assessment, 2008
The purpose of this study was to determine whether or not survey medium (electronic versus paper format) has a significant effect on the results achieved. To compare survey media, responses from elementary students to British Columbia's Satisfaction Survey were analyzed. Although this study was not experimental in design, the data set served as a…
Descriptors: Student Attitudes, Factor Analysis, Foreign Countries, Elementary School Students
Scharber, Cassandra; Dexter, Sara; Riedel, Eric – Journal of Technology, Learning, and Assessment, 2008
The purpose of this research is to analyze preservice teachers' use of and reactions to an automated essay scorer used within an online, case-based learning environment called ETIPS. Data analyzed include post-assignment surveys, a user log of students' actions within the cases, instructor-assigned scores on final essays, and interviews with four…
Descriptors: Test Scoring Machines, Essays, Student Experience, Automation
Kim, Do-Hong; Huynh, Huynh – Journal of Technology, Learning, and Assessment, 2007
This study examined comparability of student scores obtained from computerized and paper-and-pencil formats of the large-scale statewide end-of-course (EOC) examinations in the two subject areas of Algebra and Biology. Evidence in support of comparability of computerized and paper-based tests was sought by examining scale scores, item parameter…
Descriptors: Computer Assisted Testing, Measures (Individuals), Biology, Algebra
Puhan, Gautam; Boughton, Keith; Kim, Sooyeon – Journal of Technology, Learning, and Assessment, 2007
The study evaluated the comparability of two versions of a certification test: a paper-and-pencil test (PPT) and computer-based test (CBT). An effect size measure known as Cohen's d and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that the effect…
Descriptors: Computer Assisted Testing, Effect Size, Test Bias, Mathematics Tests
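Cohen's d, the test-level comparability measure used in the study above, standardizes the mean score difference between the two modes by a pooled standard deviation. A minimal computation, with invented score lists standing in for PPT and CBT groups:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: (mean_a - mean_b) / pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = stdev(group_a) ** 2, stdev(group_b) ** 2
    s_pooled = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / s_pooled

# Hypothetical scores from paper-and-pencil vs. computer-based groups.
ppt_scores = [70, 72, 68, 75, 71]
cbt_scores = [69, 73, 70, 74, 72]
d = cohens_d(ppt_scores, cbt_scores)
```

Values of |d| near zero, as in comparability studies like this one, are read as evidence that administration mode has little practical effect on scores.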
Wang, Jinhao; Brown, Michelle Stallone – Journal of Technology, Learning, and Assessment, 2007
The current research was conducted to investigate the validity of automated essay scoring (AES) by comparing group mean scores assigned by an AES tool, IntelliMetric [TM] and human raters. Data collection included administering the Texas version of the WriterPlacer "Plus" test and obtaining scores assigned by IntelliMetric [TM] and by…
Descriptors: Test Scoring Machines, Scoring, Comparative Testing, Intermode Differences
Ben-Simon, Anat; Bennett, Randy Elliott – Journal of Technology, Learning, and Assessment, 2007
This study evaluated a "substantively driven" method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater[R], to compare the performance of three approaches to automated essay scoring: a "brute-empirical" approach in which variables are selected and weighted solely according to…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Bebell, Damian; Kay, Rachel – Journal of Technology, Learning, and Assessment, 2010
This paper examines the educational impacts of the Berkshire Wireless Learning Initiative (BWLI), a pilot program that provided 1:1 technology access to all students and teachers across five public and private middle schools in western Massachusetts. Using a pre/post comparative study design, the current study explores a wide range of program…
Descriptors: Research Design, Middle Schools, Pilot Projects, Research Methodology
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
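Evaluations like the one above typically report exact and adjacent agreement between the automated scorer and human raters. A minimal sketch of those two statistics, with invented score vectors on an integer rubric:

```python
def agreement_rates(auto_scores, human_scores):
    """Exact and adjacent (within one point) agreement between two raters."""
    n = len(auto_scores)
    exact = sum(a == h for a, h in zip(auto_scores, human_scores)) / n
    adjacent = sum(abs(a - h) <= 1 for a, h in zip(auto_scores, human_scores)) / n
    return exact, adjacent

# Hypothetical essay scores on a 1-6 rubric.
auto = [4, 3, 5, 2, 4, 3]
human = [4, 4, 5, 1, 3, 3]
exact, adjacent = agreement_rates(auto, human)  # 0.5, 1.0
```

Studies of this kind often supplement these rates with correlations or weighted kappa, since adjacent agreement alone can look high even on narrow score scales.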