Showing all 13 results
Nebraska Department of Education, 2020
The Spring 2020 Nebraska Student-Centered Assessment System (NSCAS) General Summative testing was cancelled due to COVID-19. This technical report documents the processes and procedures that had been implemented to support the Spring 2020 assessments prior to the cancellation. The following sections are presented in this technical report: (1)…
Descriptors: English, Language Arts, Mathematics Tests, Science Tests
Nebraska Department of Education, 2019
This technical report documents the processes and procedures implemented to support the Spring 2019 Nebraska Student-Centered Assessment System (NSCAS) General Summative English Language Arts (ELA), Mathematics, and Science assessments by NWEA® under the supervision of the Nebraska Department of Education (NDE). The technical report shows how the…
Descriptors: English, Language Arts, Summative Evaluation, Mathematics Tests
Barnett, Elisabeth A.; Reddy, Vikash – Center for the Analysis of Postsecondary Readiness, 2017
Many postsecondary institutions, and community colleges in particular, require that students demonstrate specified levels of literacy and numeracy before taking college-level courses. Typically, students have been assessed using two widely available tests--ACCUPLACER and Compass. However, placement testing practice is beginning to change for three…
Descriptors: Student Placement, College Entrance Examinations, Educational Practices, Computer Assisted Testing
Peer reviewed
Wandall, Jakob – Journal of Applied Testing Technology, 2011
Testing and test results can be used in different ways. They can be used for regulation and control, but they can also be a pedagogic tool for assessment of student proficiency in order to target teaching, improve learning and facilitate local pedagogical leadership. To serve these purposes the test has to be used for low stakes purposes, and to…
Descriptors: Test Results, Standardized Tests, Information Technology, Foreign Countries
Peer reviewed
Chen, Li-Ju; Ho, Rong-Guey; Yen, Yung-Chin – Educational Technology & Society, 2010
This study aimed to explore the effects of marking and metacognition-evaluated feedback (MEF) in computer-based testing (CBT) on student performance and review behavior. Marking is a strategy, in which students place a question mark next to a test item to indicate an uncertain answer. The MEF provided students with feedback on test results…
Descriptors: Feedback (Response), Test Results, Test Items, Testing
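The marking-and-feedback mechanism Chen, Ho, and Yen describe can be pictured with a small sketch: each item carries a flag for whether the student marked it as uncertain, and the feedback contrasts that self-assessment with the scored result. The record structure, category labels, and wording below are illustrative assumptions, not the authors' MEF implementation.

from dataclasses import dataclass

# Hypothetical illustration of marking plus metacognition-evaluated feedback (MEF):
# students flag items they feel unsure about, and the feedback compares that
# self-assessment with the actual outcome. Field names and feedback categories
# are assumptions, not the authors' design.

@dataclass
class ItemRecord:
    item_id: int
    marked: bool      # student placed a question mark (uncertain answer)
    correct: bool     # scored result for the item

def mef_category(rec: ItemRecord) -> str:
    """Classify each item by how well the student's marking matched the outcome."""
    if rec.marked and not rec.correct:
        return "uncertain and incorrect: review this item first"
    if rec.marked and rec.correct:
        return "uncertain but correct: confidence could be higher"
    if not rec.marked and not rec.correct:
        return "confident but incorrect: possible misconception"
    return "confident and correct"

responses = [ItemRecord(1, True, False), ItemRecord(2, False, False), ItemRecord(3, False, True)]
for rec in responses:
    print(rec.item_id, mef_category(rec))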
Peer reviewed
Wu, Jianjun; Zhang, Yixin – Educational Media International, 2010
An increasing number of K-12 school teachers have been using handheld, or palmtop, computers in the classroom as an integral means of facilitating education due to their flexibility, mobility, interactive learning capability, and comparatively low cost. This study involved two experiments with handheld computers: (a) a comparison of the…
Descriptors: Test Results, Spelling, Learning Processes, Educational Technology
Peer reviewed
Jodoin, Michael G.; Zenisky, April; Hambleton, Ronald K. – Applied Measurement in Education, 2006
Many credentialing agencies today are either administering their examinations by computer or are likely to be doing so in the coming years. Unfortunately, although several promising computer-based test designs are available, little is known about how well they function in examination settings. The goal of this study was to compare fixed-length…
Descriptors: Computers, Test Results, Psychometrics, Computer Simulation
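The simulation approach such design comparisons rest on can be sketched briefly: generate item responses from an IRT model for an examinee of known ability, score a fixed-length form, and see how well the ability is recovered. The Rasch (1PL) model, the invented 30-item bank, and the grid-search scoring below are generic assumptions for illustration, not the designs or item pools the authors actually evaluated.

import random
import math

# Generic measurement-simulation sketch: administer a fixed-length form under a
# Rasch (1PL) model and recover ability by grid-search maximum likelihood.
# Item difficulties and the examinee's ability are invented for illustration.

random.seed(0)

def p_correct(theta: float, b: float) -> float:
    """Rasch probability of a correct response for ability theta and difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate_responses(theta: float, difficulties: list[float]) -> list[int]:
    return [1 if random.random() < p_correct(theta, b) else 0 for b in difficulties]

def ml_theta(responses: list[int], difficulties: list[float]) -> float:
    """Grid-search maximum-likelihood ability estimate over theta in [-4, 4]."""
    grid = [g / 10 for g in range(-40, 41)]
    def loglik(theta: float) -> float:
        return sum(
            math.log(p_correct(theta, b)) if x else math.log(1 - p_correct(theta, b))
            for x, b in zip(responses, difficulties)
        )
    return max(grid, key=loglik)

bank = [random.gauss(0.0, 1.0) for _ in range(30)]   # 30-item fixed-length form
true_theta = 0.5
estimate = ml_theta(simulate_responses(true_theta, bank), bank)
print(f"true ability {true_theta:.2f}, estimated {estimate:.2f}")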
Peer reviewed
Switzer, Deborah M.; Connell, Michael L. – Educational Measurement: Issues and Practice, 1990
Two easy-to-use microcomputer programs, the Student Problem Package and the Test Analysis Package, both by D. L. Harnisch et al. (1985), are described. These programs efficiently analyze test data for teachers. (SLD)
Descriptors: Classroom Techniques, Computer Assisted Testing, Computer Software, Data Analysis
Slater, Sharon C.; Schaeffer, Gary A. – 1996
The General Computer Adaptive Test (CAT) of the Graduate Record Examinations (GRE) includes three operational sections that are separately timed and scored. A "no score" is reported if the examinee answers fewer than 80% of the items or if the examinee does not answer all of the items and leaves the section before time expires. The 80%…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Equal Education
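The completion rule quoted in the abstract translates directly into a small check. The function below is a hypothetical restatement of that rule for illustration only; the parameter names and the handling of early exits are assumptions, not the program's actual scoring logic.

# Hypothetical restatement of the "no score" rule described above: a section
# is not scored if fewer than 80% of its items were answered, or if the
# examinee left the section before time expired without answering every item.

def section_is_scored(items_answered: int,
                      items_total: int,
                      left_before_time_expired: bool) -> bool:
    answered_fraction = items_answered / items_total
    if answered_fraction < 0.80:
        return False
    if left_before_time_expired and items_answered < items_total:
        return False
    return True

print(section_is_scored(24, 30, left_before_time_expired=False))  # True: 80% answered
print(section_is_scored(23, 30, left_before_time_expired=False))  # False: below 80%
print(section_is_scored(28, 30, left_before_time_expired=True))   # False: left early, incomplete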
Peer reviewed
Fisher, Thomas M.; Smith, Julia – Educational Measurement: Issues and Practice, 1991
Incidents affecting the implementation of large-scale testing programs are described to illustrate associated problems. Issues addressed include creation of test materials, preparation of answer documents, transportation of test materials, scoring and analysis of tests, and dissemination and utilization of test results. (TJH)
Descriptors: Answer Keys, Computer Assisted Testing, Information Dissemination, Program Implementation
Stocking, Martha L. – 1988
The construction of parallel editions of conventional tests for purposes of test security while maintaining score comparability has always been a recognized and difficult problem in psychometrics and test construction. The introduction of new modes of test construction, e.g., adaptive testing, changes the nature of the problem, but does not make…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Identification
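One simple way to picture the parallel-forms problem Stocking discusses is a toy assembly rule: sort a shared item bank by difficulty and deal items alternately into two forms so their difficulty distributions stay close. This is a generic illustration with invented difficulties, not a construction method analyzed in the paper.

import random
from statistics import mean

# Toy sketch of assembling two roughly parallel fixed forms from a common item
# bank by alternately assigning difficulty-sorted items, so the forms end up
# with similar difficulty distributions. Item difficulties are invented.

random.seed(1)
bank = sorted(random.gauss(0.0, 1.0) for _ in range(20))

form_a, form_b = [], []
for i, difficulty in enumerate(bank):
    (form_a if i % 2 == 0 else form_b).append(difficulty)

print(f"form A mean difficulty {mean(form_a):+.2f}, form B {mean(form_b):+.2f}")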
Schaeffer, Gary A.; And Others – 1995
This report summarizes the results from two studies. The first assessed the comparability of scores derived from linear computer-based (CBT) and computer adaptive (CAT) versions of the three Graduate Record Examinations (GRE) General Test measures. A verbal CAT was taken by 1,507, a quantitative CAT by 1,354, and an analytical CAT by 995…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Equated Scores
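Comparability work of this kind typically places scores from the two delivery modes on a common scale; one standard textbook device is linear (mean-sigma) equating. The sketch below applies that generic method to invented score lists and is not the procedure or data reported in the study.

import statistics

# Generic mean-sigma linear equating sketch: map scores from one form (e.g., a
# linear CBT) onto the scale of another (e.g., a CAT) by matching means and
# standard deviations. The score lists are invented for illustration.

def linear_equate(x_scores: list[float], y_scores: list[float]):
    """Return a function mapping form-X scores onto the form-Y scale."""
    mx, my = statistics.mean(x_scores), statistics.mean(y_scores)
    sx, sy = statistics.stdev(x_scores), statistics.stdev(y_scores)
    slope = sy / sx
    return lambda x: my + slope * (x - mx)

cbt_scores = [480, 520, 550, 590, 610, 640]   # invented linear CBT scores
cat_scores = [470, 515, 560, 585, 620, 650]   # invented CAT scores
to_cat_scale = linear_equate(cbt_scores, cat_scores)
print(round(to_cat_scale(600)))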
Wise, Steven L. – 1996
In recent years, a controversy has arisen about the advisability of allowing examinees to review their test items and possibly change answers. Arguments for and against allowing item review are discussed, and issues that a test designer should consider when designing a Computerized Adaptive Test (CAT) are identified. Most CATs do not allow…
Descriptors: Achievement Gains, Adaptive Testing, Computer Assisted Testing, Error Correction
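One design option a CAT developer weighing item review might consider is restricting review to the examinee's current block of items, so that completed blocks are locked. The block size and permission rule below are hypothetical illustrations of that trade-off, not a recommendation drawn from the paper.

# Hypothetical review policy for a CAT: answer review is permitted only within
# the examinee's current fixed-size block of items; earlier blocks are locked.
# The block size and the rule itself are illustrative assumptions.

BLOCK_SIZE = 5

def review_allowed(item_position: int, current_position: int) -> bool:
    """Allow review only if both items fall in the same fixed-size block."""
    return item_position // BLOCK_SIZE == current_position // BLOCK_SIZE

print(review_allowed(item_position=3, current_position=4))   # True: same block
print(review_allowed(item_position=3, current_position=7))   # False: earlier block is locked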