Showing 616 to 630 of 1,111 results
Peer reviewed
Bennett, Randy Elliot – Education Policy Analysis Archives, 2001
Describes the many causes of pressure to change large-scale assessment in the United States and suggests that the largest factor facilitating change will be technological, especially the use of the Internet. The Internet will help revolutionize the business and substance of large-scale assessment. (SLD)
Descriptors: Computer Assisted Testing, Educational Change, Internet, State Programs
Peer reviewed
Wilson, E. Vance – Computers and Education, 2004
This paper investigates how students' attitude and performance are affected by using an asynchronous learning network (ALN) to augment exams in a traditional lecture/lab course. Students used the ExamNet ALN to create, critique, and revise a database of questions that subsequently was drawn upon for course exams. Overall, students considered…
Descriptors: Test Construction, Student Participation, Student Evaluation, Computer Assisted Testing
Peer reviewed
Yin, Peng-Yeng; Chang, Kuang-Cheng; Hwang, Gwo-Jen; Hwang, Gwo-Haur; Chan, Ying – Educational Technology & Society, 2006
To accurately analyze the problems of students in learning, the composed test sheets must meet multiple assessment criteria, such as the ratio of relevant concepts to be evaluated, the average discrimination degree, difficulty degree and estimated testing time. Furthermore, to precisely evaluate the improvement of students' learning performance…
Descriptors: Student Evaluation, Performance Based Assessment, Test Construction, Computer Assisted Testing
Shermis, Mark D.; DiVesta, Francis J. – Rowman & Littlefield Publishers, Inc., 2011
"Classroom Assessment in Action" clarifies the multi-faceted roles of measurement and assessment and their applications in a classroom setting. Comprehensive in scope, Shermis and DiVesta explain basic measurement concepts and show students how to interpret the results of standardized tests. From these basic concepts, the authors then…
Descriptors: Student Evaluation, Standardized Tests, Scores, Measurement
Dorans, Neil J.; Schmitt, Alicia P. – 1991
Differential item functioning (DIF) assessment attempts to identify items or item types for which subpopulations of examinees exhibit performance differentials that are not consistent with the performance differentials typically seen for those subpopulations on collections of items that purport to measure a common construct. DIF assessment…
Descriptors: Computer Assisted Testing, Constructed Response, Educational Assessment, Item Bias
Bergstrom, Betty A.; Stahl, John A. – 1992
This paper reports a method for assessing the adequacy of existing item banks for computer adaptive testing. The method takes into account content specifications, test length, and stopping rules, and can be used to determine if an existing item bank is adequate to administer a computer adaptive test efficiently across differing levels of examinee…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
Yan, Duanli; Lewis, Charles; Stocking, Martha – 1998
It is unrealistic to suppose that standard item response theory (IRT) models will be appropriate for all new and currently considered computer-based tests. In addition to developing new models, researchers will need to give some attention to the possibility of constructing and analyzing new tests without the aid of strong models. Computerized…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Response Theory
Clariana, Roy B. – 1990
Research has shown that multiple-choice questions formed by transforming or paraphrasing a reading passage provide a measure of student comprehension. It is argued that similar transformation and paraphrasing of lesson questions is an appropriate way to form parallel multiple-choice items to be used as a posttest measure of student comprehension.…
Descriptors: Comprehension, Computer Assisted Testing, Difficulty Level, Measurement Techniques
Melancon, Janet G.; Thompson, Bruce – 1987
This paper reviews the research literature regarding the importance of Witkin's theory of psychological differentiation, particularly the research on measures of field independence using perceptual disembedding tasks. The first phase of development of a multiple-choice perceptual disembedding measure, the Finding Embedded Figures Test, is…
Descriptors: Cognitive Style, Computer Assisted Testing, Field Dependence Independence, Multiple Choice Tests
New Brunswick Dept. of Advanced Education and Training, Fredericton. Interprovincial Standards Program Coordinating Committee. – 1987
In January 1985, Employment and Immigration Canada funded a pilot project in New Brunswick for the development and testing of an Interprovincial Computerized Examination Management (ICEM) System. The resulting system comprises a dual interprovincial and provincial item bank facility, a software component offering the option of computerized…
Descriptors: Computer Assisted Testing, Computer Graphics, Foreign Countries, Item Banks
Ackerman, Terry A. – 1989
The purpose of this paper is to report results on the development of a new computer-assisted methodology for creating parallel test forms using the item response theory (IRT) information function. Recently, several researchers have approached test construction from a mathematical programming perspective. However, these procedures require…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Computer Software, Higher Education
Adema, Jos J. – 1988
A heuristic for solving large-scale zero-one programming problems is provided. The heuristic is based on the modifications made by H. Crowder et al. (1983) to the standard branch-and-bound strategy. First, the initialization is modified. The modification is only useful if the objective function values for the continuous and the zero-one…
Descriptors: Achievement Tests, Computer Assisted Testing, Heuristics, Item Banks
Murphy, Patricia A. – 1985
The videodisc, audiodisc, overlapping instruction, mapping, adaptive testing strategies, intricate instructional branching, and complex media selection models are common expectations for quality courseware today. A two team development structure, consisting of an instructional development team and a test development team, is proposed. This paper…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Courseware, Models
McBride, James R.; Weiss, David J. – 1974
A series of four vocabulary norming tests was used to develop a large, homogeneous pool of vocabulary test items for use in computer-administered adaptive testing research. Five hundred seventy-five unique vocabulary knowledge items were divided among the four norming tests, and administered to separate groups of college undergraduates. Norming…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Item Analysis
Peer reviewed
Wainer, Howard; Kiely, Gerard L. – Journal of Educational Measurement, 1987
The testlet, a bundle of test items, alleviates some problems associated with computerized adaptive testing: context effects, lack of robustness, and item difficulty ordering. While testlets may be linear or hierarchical, the most useful ones are four-level hierarchical units, containing 15 items and partitioning examinees into 16 classes. (GDC)
Descriptors: Adaptive Testing, Computer Assisted Testing, Context Effect, Item Banks