Showing all 13 results
Mitchell, Alison M.; Truckenmiller, Adrea; Petscher, Yaacov – Communique, 2015
As part of the Race to the Top initiative, the United States Department of Education made nearly 1 billion dollars available in State Educational Technology grants with the goal of ramping up school technology. One result of this effort is that states, districts, and schools across the country are using computerized assessments to measure their…
Descriptors: Computer Assisted Testing, Educational Technology, Testing, Efficiency
Bailey, Janelle M.; Johnson, Bruce; Prather, Edward E.; Slater, Timothy F. – International Journal of Science Education, 2012
Concept inventories (CIs)--typically multiple-choice instruments that focus on a single or small subset of closely related topics--have been used in science education for more than a decade. This paper describes the development and validation of a new CI for astronomy, the "Star Properties Concept Inventory" (SPCI). Questions cover the areas of…
Descriptors: Educational Strategies, Validity, Testing, Astronomy
Darrah, Marjorie; Fuller, Edgar; Miller, David – Journal of Computers in Mathematics and Science Teaching, 2010
This paper discusses a possible solution to a problem frequently encountered by educators seeking to use computer-based or multiple choice-based exams for mathematics. These assessment methodologies force a discrete grading system on students and do not allow for the possibility of partial credit. The research presented in this paper investigates…
Descriptors: College Students, College Mathematics, Calculus, Computer Assisted Testing
Vannest, Kimberly J.; Parker, Richard I.; Davis, John L.; Soares, Denise A.; Smith, Stacey L. – Behavioral Disorders, 2012
More and more, schools are considering the use of progress monitoring data for high-stakes decisions such as special education eligibility, program changes to more restrictive environments, and major changes in educational goals. Those high-stakes types of data-based decisions will need methodological defensibility. Current practice for…
Descriptors: Decision Making, Educational Change, Regression (Statistics), Field Tests
Brooks, Lindsay – Language Testing, 2009
This study, framed within sociocultural theory, examines the interaction of adult ESL test-takers in two tests of oral proficiency: one in which they interacted with an examiner (the individual format) and one in which they interacted with another student (the paired format). The data for the eight pairs in this study were drawn from a larger…
Descriptors: Testing, Rating Scales, Program Effectiveness, Interaction
Burton, Robert S. – New Directions for Testing and Measurement, 1980
Although Model A, the only norm-referenced evaluation procedure in the Title I Evaluation and Reporting System, requires no data other than the test scores themselves, it introduces two sources of bias and involves three test administrations. Roberts' two-test procedure offers the advantages of less bias and less testing. (RL)
Descriptors: Comparative Analysis, Mathematical Formulas, Scores, Statistical Bias
Svinicki, Marilla; Koch, Bill – Innovation Abstracts, 1984
The decision of whether to use essay tests or multiple choice tests depends on several qualifiers related to the different characteristics of the tests and the needs of the situation. The most important qualifier involves matching the type of test to the instructional objectives being tested, with multiple choice tests being used to measure a…
Descriptors: Comparative Analysis, Essay Tests, Multiple Choice Tests, Test Format
Williams, Richard H.; Zimmerman, Donald W. – Journal of Experimental Education, 1982
The reliability of simple difference scores is greater than, less than, or equal to that of residualized difference scores, depending on whether the correlation between pretest and posttest scores is greater than, less than, or equal to the ratio of the standard deviations of pretest and posttest scores. (Author)
Descriptors: Achievement Gains, Comparative Analysis, Correlation, Pretests Posttests
Dudley, Albert – Language Testing, 2006
This study examined the multiple true-false (MTF) test format in second language testing by comparing multiple-choice (MCQ) and multiple true-false (MTF) test formats in two language areas of general English: vocabulary and reading. Two counter-balanced experimental designs--one for each language area--were examined in terms of the number of MCQ…
Descriptors: Second Language Learning, Test Format, Validity, Testing
Choppin, Bruce; And Others – 1982
A detailed description of five latent structure models of achievement measurement is presented. The first project paper, by David L. McArthur, analyzes the history of mental testing to show how conventional item analysis procedures were developed, and how dissatisfaction with them has led to fragmentation. The range of distinct conceptual and…
Descriptors: Academic Achievement, Achievement Tests, Comparative Analysis, Data Analysis
Takala, Sauli – 1998
This paper discusses recent developments in language testing. It begins with a review of the traditional criteria that are applied to all measurement and outlines recent emphases that derive from the expanding range of stakeholders. Drawing on Alderson's seminal work, criteria are presented for evaluating communicative language tests. Developments…
Descriptors: Alternative Assessment, Communicative Competence (Languages), Comparative Analysis, Evaluation Criteria
Jacobs, Lucy Cheser; Chase, Clinton I. – 1992
This book offers specific how-to advice to college faculty on every stage of the testing process, including planning the test and classifying objectives to be measured, ensuring the validity and reliability of the test, and grading in such a way as to arrive at fair grades based on relevant data. The book examines the strengths and weaknesses of…
Descriptors: Cheating, College Faculty, Comparative Analysis, Computer Assisted Testing
van Weeren, J., Ed. – 1983
Presented in this symposium reader are nine papers, four of which deal with the theory and impact of the Rasch model on language testing and five of which discuss final examinations in secondary schools in both general and specific terms. The papers are: "Introduction to Rasch Measurement: Some Implications for Language Testing" (J. J.…
Descriptors: Adolescents, Comparative Analysis, Comparative Education, Difficulty Level