Showing 1 to 15 of 40 results
Peer reviewed
B. Goecke; S. Weiss; B. Barbot – Journal of Creative Behavior, 2025
The present paper questions the content validity of the eight creativity-related self-report scales available in PISA 2022's context questionnaire and provides a set of considerations for researchers interested in using these indexes. Specifically, we point out some threats to the content validity of these scales (e.g., "creative thinking…
Descriptors: Creativity, Creativity Tests, Questionnaires, Content Validity
Peer reviewed
Jerrim, John – Assessment in Education: Principles, Policy & Practice, 2016
The Programme for International Student Assessment (PISA) is an important cross-national study of 15-year-olds' academic achievement. Although it has traditionally been conducted using paper-and-pencil tests, the vast majority of countries will use computer-based assessment from 2015. In this paper, we consider how cross-country comparisons of children's…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Peer reviewed
Traynor, Anne – Educational Assessment, 2017
Variation in test performance among examinees from different regions or national jurisdictions is often partially attributed to differences in the degree of content correspondence between local school or training program curricula, and the test of interest. This posited relationship between test-curriculum correspondence, or "alignment,"…
Descriptors: Test Items, Test Construction, Alignment (Education), Curriculum
Peer reviewed
Schneider, M. Christina; Huff, Kristen L.; Egan, Karla L.; Gaines, Margie L.; Ferrara, Steve – Educational Assessment, 2013
A primary goal of standards-based statewide achievement tests is to classify students into achievement levels that enable valid inferences about student content area knowledge and skill. Explicating how knowledge and skills are expected to differ in complexity in achievement level descriptors, and how that complexity is related to empirical item…
Descriptors: Test Items, Difficulty Level, Achievement Tests, Test Interpretation
Peer reviewed
PDF on ERIC
National Center for Education Statistics, 2007
The purpose of this document is to provide background information that will be useful in interpreting the 2007 results from the Trends in International Mathematics and Science Study (TIMSS) by comparing its design, features, framework, and items with those of the U.S. National Assessment of Educational Progress and another international assessment…
Descriptors: National Competency Tests, Comparative Analysis, Achievement Tests, Test Items
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1979
Wilcox has described three probability models which characterize a single test item in terms of a population of examinees (ED 156 718). This note indicates that similar models can be derived which characterize a single examinee in terms of an item domain. A numerical illustration is given. (Author/JKS)
Descriptors: Achievement Tests, Item Analysis, Mathematical Models, Probability
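Wilcox's specific models are in the cited report; as a rough illustration of the general idea, the sketch below uses the classical binomial error model, the textbook way to characterize a single examinee in terms of an item domain. The item count and domain score are fabricated for the example, not taken from the paper.

```python
# A minimal sketch of the binomial error model: an examinee with true
# domain score zeta answers each of n randomly sampled items correctly
# with probability zeta, so the observed score X is Binomial(n, zeta).
from math import comb

def score_probability(x: int, n: int, zeta: float) -> float:
    """P(X = x) for an examinee with domain score zeta on n sampled items."""
    return comb(n, x) * zeta**x * (1 - zeta)**(n - x)

# Hypothetical example: an examinee who knows 70% of the domain, 10 items.
n, zeta = 10, 0.7
for x in range(n + 1):
    print(f"P(X={x:2d}) = {score_probability(x, n, zeta):.4f}")
```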
PDF pending restoration
Lenke, Joanne M.; Canner, Jane M. – 1980
Traditionally, comparable content-area forms and levels of a multi-level, multi-form achievement test series have been equated using the equipercentile method. There is some evidence to indicate, however, that better procedures are needed. A linking procedure is being used to equate 40 different test forms of an achievement test simultaneously:…
Descriptors: Achievement Tests, Difficulty Level, Elementary Secondary Education, Equated Scores
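For readers unfamiliar with the traditional method the abstract contrasts with the proposed linking procedure, here is a minimal equipercentile-equating sketch: a form-X score is mapped to the form-Y score holding the same percentile rank. The score distributions and the helper name `equipercentile_equate` are illustrative, not from the paper.

```python
import numpy as np

def equipercentile_equate(scores_x, scores_y, x_value):
    """Return the form-Y score at the same percentile rank as x_value on form X."""
    scores_x = np.sort(scores_x)
    scores_y = np.sort(scores_y)
    # Percentile rank of x_value on form X: proportion scoring at or below it.
    pr = np.searchsorted(scores_x, x_value, side="right") / len(scores_x)
    # Form-Y score at that percentile rank.
    return float(np.quantile(scores_y, pr))

rng = np.random.default_rng(0)
form_x = rng.binomial(40, 0.55, size=2000)   # hypothetical raw scores, form X
form_y = rng.binomial(40, 0.60, size=2000)   # hypothetical raw scores, form Y (easier)
print(equipercentile_equate(form_x, form_y, x_value=25))
```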
Peer reviewed
Weber, Margaret B. – Educational and Psychological Measurement, 1977
Bilevel dimensionality of probability was examined via factor analysis, Rasch latent trait analysis, and classical item analysis. Results suggest that when nonstandardized measures are the criteria for achievement, relying solely on estimates of content validity may lead to erroneous interpretation of test score data. (JKS)
Descriptors: Achievement, Achievement Tests, Factor Analysis, Item Analysis
SWRL Instructional Improvement Digest, 1982
The "Instructional Improvement Digest" communicates advisory information about practical courses of action that can be implemented by teachers and administrators to improve key areas of school instruction. The series digest topics draws upon inquiry associated with the Southwest Regional Laboratory for Educational Research and Development's…
Descriptors: Achievement Tests, Educational Planning, Elementary Secondary Education, Instructional Development
Haenn, Joseph F. – 1981
Procedures for conducting functional level testing have been available to practitioners for some time. However, the Title I Evaluation and Reporting System (TIERS), developed in response to the Education Amendments of 1974 to the Elementary and Secondary Education Act (ESEA), has provided the impetus for widespread adoption of this…
Descriptors: Achievement Tests, Difficulty Level, Scores, Scoring
Divgi, D. R. – 1980
Because it is difficult to ascertain the dimensionality of a test composed of binary items through the use of factor analysis alone, a method is proposed that combines item characteristic curve (ICC) theory with factor analysis. Factor structure of tetrachoric correlations is distorted by non-normal distribution of ability. Item characteristics…
Descriptors: Achievement Tests, Error of Measurement, Factor Analysis, Factor Structure
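As a rough illustration of the tetrachoric correlations whose factor structure the abstract discusses, the sketch below uses the classical "cosine-pi" approximation from a 2x2 table of two binary items, a stand-in for the full maximum-likelihood bivariate-normal estimate; the counts are fabricated.

```python
# Cosine-pi approximation to the tetrachoric correlation of two binary items.
from math import cos, pi, sqrt

def tetrachoric_approx(a: int, b: int, c: int, d: int) -> float:
    """2x2 table counts: a = both correct, b = item 1 only correct,
    c = item 2 only correct, d = both incorrect."""
    return cos(pi / (1 + sqrt((a * d) / (b * c))))

# Hypothetical counts for two items answered by 400 examinees.
print(round(tetrachoric_approx(a=180, b=60, c=70, d=90), 3))
```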
Peer reviewed
Benson, Jeri – Educational and Psychological Measurement, 1981
A review of the research on item writing, item format, test instructions, and item readability indicated the importance of instrument structure in the interpretation of test data. The effect of failing to consider these areas on the content validity of achievement test scores is discussed. (Author/GK)
Descriptors: Achievement Tests, Elementary Secondary Education, Literature Reviews, Scores
Nimmer, Donald N. – 1980
The sampling of test items found within the Science portion of selected standardized achievement test batteries is examined. The lack of sensitivity of such items to the curriculum changes often attempted in Earth Science is demonstrated. The following five standardized achievement test batteries commonly administered to pupils in grades 7-9 were…
Descriptors: Achievement Tests, Curriculum Evaluation, Earth Science, Junior High Schools
Tomsic, Margie L.; And Others – 1987
Extended caution indices (ECIs) specify the degree of confidence that can be placed in an individual's test score by analyzing patterns of item response. Among the most promising of such indices are the standardized ECIs. Contrary to the literature, a previous study found several instances of nonnormal distributions of ECIs with samples…
Descriptors: Achievement Tests, Elementary Education, Goodness of Fit, Latent Trait Theory
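The extended caution indices the abstract studies build on Sato's original caution index; as a hedged sketch of the underlying idea, the code below computes Sato's index, which compares an examinee's response pattern against the Guttman pattern implied by the group's item difficulty order. Tatsuoka's extended and standardized variants refine this comparison; the data here are fabricated.

```python
import numpy as np

def caution_index(responses: np.ndarray, i: int) -> float:
    """Sato's C for examinee i: near 0 for a pattern consistent with item
    difficulty order; larger values flag aberrant response patterns."""
    p = responses.mean(axis=0)            # item proportion-correct (p-values)
    u = responses[i]                      # examinee i's 0/1 response pattern
    r = int(u.sum())
    g = np.zeros_like(u)                  # Guttman pattern with the same
    g[np.argsort(-p)[:r]] = 1             # total score: r easiest items correct
    cov_u = np.cov(u, p)[0, 1]
    cov_g = np.cov(g, p)[0, 1]
    return 0.0 if cov_g == 0 else 1 - cov_u / cov_g

rng = np.random.default_rng(1)
difficulties = np.linspace(0.3, 0.9, 20)             # hypothetical item p-values
data = rng.binomial(1, difficulties, size=(50, 20))  # 50 examinees x 20 items
print(round(caution_index(data, i=0), 3))
```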
Ohio State Univ., Columbus. Trade and Industrial Education Instructional Materials Lab. – 1978
The Ohio Vocational Achievement Tests are specially designed instruments for use by teachers, supervisors, and administrators to evaluate and diagnose vocational achievement for improving instruction in secondary vocational programs at the 11th and 12th grade levels. This guide explains the Ohio Vocational Achievement Tests and how they are used.…
Descriptors: Academic Achievement, Achievement Tests, High Schools, Scoring Formulas