Showing 1 to 15 of 29 results
Peer reviewed
Hryvko, Antonina V.; Zhuk, Yurii O. – Journal of Curriculum and Teaching, 2022
A distinctive feature of the presented study is its comprehensive approach to the reliability of linguistic testing results, which are affected by several functional and variable factors. Contradictory and ambiguous views among researchers on these issues determine the relevance of this study. The article highlights the problem of equivalence…
Descriptors: Student Evaluation, Language Tests, Test Format, Test Items
Peer reviewed
Madya, Suwarsih; Retnawati, Heri; Purnawan, Ari; Putro, Nur Hidayanto Pancoro Setyo; Apino, Ezi – TEFLIN Journal: A publication on the teaching and learning of English, 2019
This explorative-descriptive study set out to examine the equivalence among Test of English Proficiency (TOEP) forms, developed by the Indonesian Testing Service Centre (ITSC) and co-founded by The Association for The Teaching of English as a Foreign Language in Indonesia (TEFLIN) and The Association of Psychology in Indonesia. Using a…
Descriptors: Language Tests, Language Proficiency, English (Second Language), Second Language Learning
National Council on Measurement in Education, 2012
Testing and data integrity on statewide assessments is defined as the establishment of a comprehensive set of policies and procedures for: (1) the proper preparation of students; (2) the management and administration of the test(s) that will lead to accurate and appropriate reporting of assessment results; and (3) maintaining the security of…
Descriptors: State Programs, Integrity, Testing, Test Preparation
Peer reviewed
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Peer reviewed
Pyle, Katie; Jones, Emily; Williams, Chris; Morrison, Jo – Educational Research, 2009
Background: All national curriculum tests in England are pre-tested as part of the development process. Differences in pupil performance between pre-test and live test are consistently found. This difference has been termed the pre-test effect. Understanding the pre-test effect is essential in the test development and selection processes and in…
Descriptors: Foreign Countries, Pretesting, Context Effect, National Curriculum
Hanick, Patricia L.; Huang, Chi-Yu – 2002
The term "equating" refers to a statistical procedure that adjusts test scores on different forms of the same examination so that scores can be interpreted interchangeably. This study examines the impact of equating with fewer items than originally planned when items have been removed from the equating set for a variety of reasons. A…
Descriptors: Equated Scores, Test Format, Test Items, Test Results
DeVito, Pasquale J., Ed.; Koenig, Judith A., Ed. – 2001
A committee of the National Research Council studied the desirability, feasibility, and potential impact of two reporting practices for National Assessment of Educational Progress (NAEP) results: district-level reporting and market-basket reporting. NAEP's sponsors believe that reporting district-level NAEP results would support state and local…
Descriptors: Elementary Secondary Education, Research Methodology, Research Reports, School Districts
Peer reviewed
Politzer, Robert L.; McGroarty, Mary – International Review of Applied Linguistics in Language Teaching, 1983
Discusses the difference between communicative competence and linguistic performance. Describes the development, administration, and results of a three-part discrete point test based on rather specific definitions of communicative competence. (EKN)
Descriptors: Communicative Competence (Languages), English (Second Language), Language Tests, Linguistic Performance
Peer reviewed
Schraw, Gregory – Journal of Experimental Education, 1997
The basis of students' confidence in their answers to test items was studied with 95 undergraduates. Results support the domain-general hypothesis that predicts that confidence judgments will be related to performance on a particular test and also to confidence judgments and performance on unrelated tests. (SLD)
Descriptors: Higher Education, Metacognition, Performance Factors, Scores
Peer reviewed
Dochy, Filip; Moerkerke, George; De Corte, Erik; Segers, Mien – European Journal of Psychology of Education, 2001
Focuses on the discussion of whether "none of the above" (NOTA) questions should be used on tests. Discusses a study in which a protocol analysis was conducted on written statements of examinees while answering NOTA items. Explains that a multiple-choice test was given to university students finding that NOTA options seem to be more attractive.…
Descriptors: College Students, Educational Research, Higher Education, Skill Development
Lockhart, Kathleen A.; And Others – 1983
Three experiments were conducted, all employing undergraduates in college courses taught according to personalized system of instruction (PSI) principles. Experiment I examined retention as a function of the feedback delay interval in an introductory anthropology course using short-answer essay tests. Experiment II varied the feedback delay…
Descriptors: Cost Effectiveness, Feedback, Higher Education, Long Term Memory
McCall, Chester H., Jr.; Gardner, Suzanne – 1984
The Research Services of the National Education Association (NEA) conducted a nationwide teacher opinion poll (TOP) based upon a stratified disproportionate two-state cluster sample of classroom teachers. This research study was conducted to test the hypothesis that the order of presentation of items would make no difference in the conclusions…
Descriptors: Attitude Measures, Elementary Secondary Education, National Surveys, Statistical Analysis
Stocking, Martha L. – 1988
The construction of parallel editions of conventional tests for purposes of test security while maintaining score comparability has always been a recognized and difficult problem in psychometrics and test construction. The introduction of new modes of test construction, e.g., adaptive testing, changes the nature of the problem, but does not make…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Identification
Ohio State Univ., Columbus. Trade and Industrial Education Instructional Materials Lab. – 1978
The Ohio Vocational Achievement Tests are specially designed instruments for use by teachers, supervisors, and administrators to evaluate and diagnose vocational achievement for improving instruction in secondary vocational programs at the 11th and 12th grade levels. This guide explains the Ohio Vocational Achievement Tests and how they are used.…
Descriptors: Academic Achievement, Achievement Tests, High Schools, Scoring Formulas
Peer reviewed
Wester, Anita – Scandinavian Journal of Educational Research, 1995
The effect of different item formats (multiple choice and open) on gender differences in test performance was studied for the Swedish Diagrams, Tables, and Maps (DTM) test with 90 secondary school students. The change to open format resulted in no reduction in gender differences on the DTM. (SLD)
Descriptors: Aptitude Tests, Foreign Countries, Multiple Choice Tests, Scores