Showing 1 to 15 of 437 results
Wang, Huan; Kim, Dong-In – Online Submission, 2022
A fundamental premise in assessment is that the underlying construct is equivalent across different groups of students and that this structure does not vary over years. The pandemic has potentially impacted opportunity to learn and disrupted the internal structure of assessments in various ways. Past research has suggested that students tended to…
Descriptors: Measurement, Error of Measurement, COVID-19, Pandemics
Peer reviewed
Perry, Lindsey – AERA Online Paper Repository, 2017
Before an assessment is used to make decisions, the validity of the intended interpretation must be evaluated. The purpose of this paper is to describe how the argument-based approach and an interpretation/use argument (IUA) (Kane, 2013) were used to validate the interpretations made from the new Early Grade Mathematics Assessment (EGMA)…
Descriptors: Student Evaluation, Mathematics Tests, Test Interpretation, Inferences
Powell, J. C. – International Association for Development of the Information Society, 2013
This reflection paper challenges current test scoring practices on the grounds that most wrong-answer selections are thoughtful, not random, presenting research supporting this proposition. An alternative test scoring system is presented and described, and its outcomes are discussed. This new scoring system increases the number of variables considered,…
Descriptors: Test Theory, Test Interpretation, Scoring, Multiple Choice Tests
Peer reviewed
Ekmekci, Adem; Carmona, Guadalupe – North American Chapter of the International Group for the Psychology of Mathematics Education, 2014
Accurate interpretations of large-scale assessment results and sound judgments about students' mathematical literacy depend on these assessments' validity and reliability. One important type of evidence towards this validation is the dimensionality analysis, which explores the conformity between the intended factorial structure (related closely to…
Descriptors: Numeracy, Achievement Tests, Foreign Countries, International Assessment
Feuer, Michael J. – Educational Testing Service, 2011
Few arguments about education are as effective at galvanizing public attention and motivating political action as those that compare the performance of students with their counterparts in other countries and that connect academic achievement to economic performance. Because data from international large-scale assessments (ILSA) have a powerful…
Descriptors: International Assessment, Test Interpretation, Testing Problems, Comparative Testing
Stoneberg, Bert D. – Online Submission, 2009
Test developers are responsible to define how test scores should be interpreted and used. The No Child Left Behind Act of 2001 (NCLB) directed the Secretary of Education to use results from the National Assessment of Educational Progress (NAEP) to confirm the proficiency scores from state developed tests. There are two sets of federal definitions…
Descriptors: National Competency Tests, State Programs, Achievement Tests, Scores
Lang, W. Steve; Wilkerson, Judy R. – Online Submission, 2008
The National Council for Accreditation of Teacher Education (NCATE, 2002) requires teacher education units to develop assessment systems and evaluate both the success of candidates and unit operations. Because of a stated, but misguided, fear of statistics, NCATE fails to use accepted terminology to assure the quality of institutional evaluative…
Descriptors: State Standards, Validity, Resource Materials, Reliability
Blackburn, Rhonda D. – 2001
Profile analysis refers to interpreting or analyzing the pattern of tests, subtests, or scores. The analysis may be across groups or across scores for one individual. This approach to analyzing data is being used by clinicians to help in the translation of the results of popular assessment instruments. This paper examines several examples of the…
Descriptors: Intelligence Tests, Profiles, Scores, Test Interpretation
Fraas, John W.; Drushal, J. Michael; Graham, Jeff – 2002
This paper presents a method designed to assist practitioners in interpreting the practical significance of a statistically significant logistic regression coefficient. To avoid the interpretation problems encountered when using the traditionally reported change in either the log odds or odds values, this method centers the…
Descriptors: Computer Software, Probability, Regression (Statistics), Test Interpretation
Kane, Michael – 2000
Validity is concerned with the clarification and justification of the intended interpretations and uses of observed scores. It has not been easy to formulate a general methodology or set of principles for validation, but progress has been made, especially as the field has moved from relatively limited criterion-related models to sophisticated…
Descriptors: Scores, Test Interpretation, Test Results, Theories
Woolley, Kristin K. – 1996
The theory of score validity has undergone several revisions within the measurement community. The current consensus among professionals is a rejection of the trinitarian doctrine (J. P. Guion, 1980) of score validity and the recognition of a unified view that includes social consequences of test interpretation and use. While some aspects of the…
Descriptors: Models, Scores, Standards, Test Interpretation
Shannon, Gregory A. – 1986
The types of test score interpretive information considered useful to failing examinees were studied through interviews with Educational Testing Service (ETS) staff members. Research literature on interpretive information for failing test takers was reviewed, and procedures currently used at ETS were determined. Managers of 23 testing programs…
Descriptors: Criterion Referenced Tests, Failure, Feedback, Scoring
Green, Donald Ross – 1979
Sources of test bias are discussed and steps to prevent or reduce bias in tests are listed. Test bias can occur because of the way test materials are written, the conditions of administration, and the interpretations given the results. Steps to prevent or reduce bias arising in the test development process include: (1) using heterogeneous sets of…
Descriptors: Educational Testing, Test Bias, Test Construction, Test Interpretation
McMillan, James H. – 2000
Implicit in the work of S. Huck and H. Sandler (1979) is the idea that the concept "rival hypotheses" refers to some kind of alternative explanation. Rather than being threats to internal validity, rival hypotheses, in their view, are interpretations that differ from those of the researcher. This paper broadens the idea to include any…
Descriptors: Classification, Educational Research, Hypothesis Testing, Test Interpretation
Kane, Michael – 1999
The relationship between generalizability and validity is explained, making four important points. The first is that generalizability coefficients provide upper bounds on validity. The second point is that generalization is one step in most interpretive arguments, and therefore, generalizability is a necessary condition for the validity of these…
Descriptors: Error of Measurement, Generalizability Theory, Test Interpretation, Validity