Publication Date
In 2025: 1
Since 2024: 1
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 6
Since 2006 (last 20 years): 9
Author
Al-Nassri, Sabah: 1
Anderson, Scarvia B.: 1
Angoff, William H.: 1
Bailey, Jennifer: 1
Baldwin, Peter: 1
Basu, Jayanti: 1
Beetham, James: 1
Benson, Nicholas: 1
Blixt, Sonia L.: 1
Borg, Mark G.: 1
Bracey, Gerald W.: 1
Location
Canada: 1
India: 1
Italy: 1
Malta: 1
Netherlands: 1
New Jersey: 1
Ohio: 1
USSR: 1
United Kingdom (Great Britain): 1
United States: 1
Uruguay: 1
Laws, Policies, & Programs
Education for All Handicapped…: 1
Individuals with Disabilities…: 1
National Defense Education Act: 1
No Child Left Behind Act 2001: 1
Assessments and Surveys
National Assessment of…: 4
International English…: 1
Iowa Tests of Basic Skills: 1
Law School Admission Test: 1
SAT (College Admission Test): 1
Test of English for…: 1
Philipp Sterner; Kim De Roover; David Goretzko – Structural Equation Modeling: A Multidisciplinary Journal, 2025
When comparing relations and means of latent variables, it is important to establish measurement invariance (MI). Most methods to assess MI are based on confirmatory factor analysis (CFA). Recently, new methods have been developed based on exploratory factor analysis (EFA); most notably, as extensions of multi-group EFA, researchers introduced…
Descriptors: Error of Measurement, Measurement Techniques, Factor Analysis, Structural Equation Models
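The invariance checks discussed in the abstract above can be illustrated in miniature. The sketch below is not the multi-group EFA method the article introduces; it simply fits a one-factor model to two simulated groups with scikit-learn's FactorAnalysis (an assumed stand-in) and compares the estimated loadings with Tucker's congruence coefficient, the kind of cross-group comparison that MI testing formalizes. All data and loading values are invented for illustration.

```python
# Crude check of loading similarity across two groups, as a rough proxy for
# the configural/metric invariance questions raised in multi-group EFA.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

def simulate_group(n, loadings, noise=0.5):
    """Generate item responses from a one-factor model with given loadings."""
    eta = rng.normal(size=(n, 1))                 # latent factor scores
    return eta @ loadings[None, :] + rng.normal(scale=noise, size=(n, loadings.size))

loadings_a = np.array([0.8, 0.7, 0.6, 0.75])      # group A population loadings
loadings_b = np.array([0.8, 0.7, 0.3, 0.75])      # group B: item 3 loads differently

groups = {"A": simulate_group(500, loadings_a), "B": simulate_group(500, loadings_b)}

estimated = {}
for name, X in groups.items():
    fa = FactorAnalysis(n_components=1, random_state=0).fit(X)
    estimated[name] = fa.components_[0]           # estimated loadings for this group

# Tucker's congruence coefficient between the two loading vectors: values near 1
# suggest a similar factor structure; lower values flag possible non-invariance.
# (Absolute value guards against the sign indeterminacy of factor solutions.)
a, b = estimated["A"], estimated["B"]
phi = np.abs(a @ b) / np.sqrt((a @ a) * (b @ b))
print("loadings A:", np.round(a, 2))
print("loadings B:", np.round(b, 2))
print("congruence:", round(float(phi), 3))
```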
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
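As a rough illustration of the common-item idea the abstract refers to, the sketch below links two hypothetical test forms through shared anchor items using a simple mean-sigma transformation. The data, the form names, and the choice of a linear link are assumptions for illustration only, not the authors' design or any operational equating procedure.

```python
# Toy illustration of the common-item (anchor) design: scores from two forms
# are placed on one scale via items administered in both forms.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical anchor-item scores observed in the two administrations.
anchor_on_form_x = rng.normal(loc=10.0, scale=2.0, size=300)   # group taking Form X
anchor_on_form_y = rng.normal(loc=12.0, scale=2.5, size=300)   # group taking Form Y

# Mean-sigma linear link: choose A, B so the anchor distribution from the
# Form X group matches the anchor distribution from the Form Y group.
A = anchor_on_form_y.std(ddof=1) / anchor_on_form_x.std(ddof=1)
B = anchor_on_form_y.mean() - A * anchor_on_form_x.mean()

def to_form_y_scale(form_x_score: float) -> float:
    """Map a Form X score onto the Form Y scale via the anchor-based link."""
    return A * form_x_score + B

print(f"link: y = {A:.2f} * x + {B:.2f}")
print("Form X score 15 reported on the Form Y scale:", round(to_form_y_scale(15.0), 2))
```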
Schmidgall, Jonathan; Cid, Jaime; Carter Grissom, Elizabeth; Li, Lucy – ETS Research Report Series, 2021
The redesigned "TOEIC Bridge"® tests were designed to evaluate test takers' English listening, reading, speaking, and writing skills in the context of everyday adult life. In this paper, we summarize the initial validity argument that supports the use of test scores for the purpose of selection, placement, and evaluation of a test…
Descriptors: Language Tests, Second Language Learning, English (Second Language), Language Proficiency
Basu, Jayanti – International Journal of School & Educational Psychology, 2016
Intelligence testing was one of the earliest interests of psychologists in India. Adaptation of Western intelligence tests was a focus of psychologists in the first half of the last century. Indigenous development of intelligence tests has been attempted, but diversity of language and culture, complexity of school systems, and infrastructural…
Descriptors: Intelligence Tests, Foreign Countries, School Psychology, Test Interpretation
Rupp, André A. – Applied Measurement in Education, 2018
This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…
Descriptors: Design, Automation, Scoring, Test Scoring Machines
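One routine quality-control check in this area is human-machine agreement on a monitoring sample. The sketch below computes exact agreement and quadratic weighted kappa with scikit-learn; the scores are invented, and any thresholds one would apply in practice are not taken from the article.

```python
# One common QC check for an automated scoring system: agreement between
# engine scores and human ratings on a monitoring sample.
import numpy as np
from sklearn.metrics import cohen_kappa_score

human  = np.array([3, 2, 4, 1, 3, 5, 2, 4, 3, 2])   # hypothetical human essay scores
engine = np.array([3, 2, 3, 1, 4, 5, 2, 4, 3, 1])   # hypothetical automated scores

exact_agreement = float(np.mean(human == engine))
qwk = cohen_kappa_score(human, engine, weights="quadratic")

print(f"exact agreement: {exact_agreement:.2f}")
print(f"quadratic weighted kappa: {qwk:.2f}")
# In operational monitoring these statistics would typically be tracked over time,
# flagging prompts or score points where agreement drops below a set threshold.
```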
W. James Popham; David C. Berliner; Neal M. Kingston; Susan H. Fuhrman; Steven M. Ladd; Jeffrey Charbonneau; Madhabi Chatterji – Quality Assurance in Education: An International Perspective, 2014
Purpose: Against a backdrop of high-stakes assessment policies in the USA, this paper explores the challenges, promises and the "state of the art" with regard to designing standardized achievement tests and educational assessment systems that are instructionally useful. Authors deliberate on the consequences of using inappropriately…
Descriptors: Standardized Tests, High Stakes Tests, Student Evaluation, Achievement Tests
Kranzler, John H.; Benson, Nicholas; Floyd, Randy G. – International Journal of School & Educational Psychology, 2016
This article briefly reviews the history of intellectual assessment of children and youth in the United States of America, as well as current practices and future directions. Although administration of intelligence tests in the schools has been a longstanding practice in the United States, their use has also elicited sharp controversy over time.…
Descriptors: Intelligence Tests, Children, Youth, Test Construction

Stone, Mark H.; Wright, Benjamin D.; Stenner, A. Jackson – Journal of Outcome Measurement, 1999
Describes mapping variables, the principal technique for planning and constructing a test or rating instrument. A variable map is also useful for interpreting results. Provides several maps to show the importance and value of mapping a variable by person and item data. (Author/SLD)
Descriptors: Planning, Rating Scales, Research Methodology, Test Construction
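A variable (Wright) map of the sort described can be approximated in a few lines: person measures and item difficulties are binned against the same logit scale and printed side by side. The measures below are hypothetical; a real map would come from a Rasch or IRT calibration.

```python
# Minimal text-based variable map: person measures and item difficulties
# displayed against the same (logit) scale.
import numpy as np

persons = np.array([-1.8, -0.9, -0.4, 0.1, 0.3, 0.7, 1.2, 1.9])   # hypothetical person measures
items   = {"item1": -1.5, "item2": -0.5, "item3": 0.2, "item4": 1.0, "item5": 2.1}

print(f"{'logit':>6} | persons | items")
for level in np.arange(2.5, -2.6, -0.5):
    # Count persons and list items falling in the half-logit bin [level, level + 0.5).
    n_persons = int(np.sum((persons >= level) & (persons < level + 0.5)))
    item_names = [name for name, d in items.items() if level <= d < level + 0.5]
    print(f"{level:6.1f} | {'X' * n_persons:<7} | {' '.join(item_names)}")
```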

Dinero, Thomas E.; Blixt, Sonia L. – College Teaching, 1988
Takahiro Sato, a Japanese engineer, developed a method for studying the composition of an objective test by examining how individual students respond to its items and summarizing that information in a single index. The Student-Problem Chart is discussed. (MLW)
Descriptors: College Students, Higher Education, Performance, Test Construction
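The Student-Problem chart mentioned above can be sketched by sorting a scored response matrix: students by total score, items by how many students answered them correctly. The deviation count below is only a rough stand-in for Sato's caution index, and all responses are simulated.

```python
# Sketch of a Student-Problem (S-P) style table: students sorted by total score,
# items sorted by how many students answered them correctly.
import numpy as np

rng = np.random.default_rng(2)
scored = rng.integers(0, 2, size=(8, 6))           # hypothetical 0/1 item scores (students x items)

student_order = np.argsort(-scored.sum(axis=1), kind="stable")   # high scorers first
item_order = np.argsort(-scored.sum(axis=0), kind="stable")      # easy items first
sp_table = scored[student_order][:, item_order]

print("S-P table (rows: students by score, columns: items by easiness):")
print(sp_table)

# A simple per-student irregularity count: how far each row is from the ideal
# Guttman pattern (all successes on the easiest items). This is only a rough
# stand-in for Sato's caution index, which weights deviations by item totals.
for row, s in zip(sp_table, student_order):
    t = row.sum()
    ideal = np.r_[np.ones(t, dtype=int), np.zeros(row.size - t, dtype=int)]
    print(f"student {s}: score {t}, pattern deviations {int(np.sum(row != ideal))}")
```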
Bailey, Jennifer; Little, Chelsea; Rigney, Rex; Thaler, Anna; Weiderman, Ken; Yorkovich, Ben – Online Submission, 2010
This handbook is designed as a quick reference for first-year teachers who find themselves in an assessment driven environment with little experience to help make sense of the language, underlying philosophy, or organizational structure of the assessment system. The handbook begins with advice on developing and evaluating effective learning…
Descriptors: Student Evaluation, Portfolio Assessment, Elementary Secondary Education, Performance Based Assessment

Yaney, Joseph P. – Performance Improvement, 1997
Offers suggestions for designing a management questionnaire and interpreting employee responses so that executives may make an informed decision on whether to support an intervention. Highlights include employee perceptions on competing goals; supervisory suggestions and employee reactions; and a case study. (Author/LRW)
Descriptors: Case Studies, Employee Attitudes, Questionnaires, Supervision
Bracey, Gerald W. – 2000
This fastback provides information about what tests can and cannot do, how they are constructed, and how they are used and misused. It offers suggestions about how best to interpret test results. The chapters are: (1) "A Test on Testing"; (2) "Basic Considerations"; (3) "Standardized Tests"; (4) "Performance Tests"; (5) "Interpreting Test Scores";…
Descriptors: Achievement Tests, Elementary Secondary Education, Standardized Tests, Test Construction
Shermis, Mark D.; DiVesta, Francis J. – Rowman & Littlefield Publishers, Inc., 2011
"Classroom Assessment in Action" clarifies the multi-faceted roles of measurement and assessment and their applications in a classroom setting. Comprehensive in scope, Shermis and Di Vesta explain basic measurement concepts and show students how to interpret the results of standardized tests. From these basic concepts, the authors then…
Descriptors: Student Evaluation, Standardized Tests, Scores, Measurement
Somwaru, Jwalla P. – 1982
Disadvantages of traditional intelligence tests with handicapped children are discussed, and an alternative approach, The "Assessment of Basic Competencies" (ABC) is presented. The background and design of the ABC and the three domains of the model (language skills, math reasoning skills, and information processing skills) are…
Descriptors: Disabilities, Elementary Secondary Education, Evaluation Methods, Handicap Identification
Pinto, Maria Antonietta; Titone, Renzo – Rassegna Italiana di Linguistica Applicata, 1989
Discusses the development, administration, and evaluation of the Test of Metalinguistic Ability (TAM). Developed at the University of Rome, the TAM consists of 96 items divided into 6 parts: comprehension, synonyms, acceptability, ambiguity, grammatical function, and phonemic segmentation. (CFM)
Descriptors: Cognitive Processes, Cognitive Tests, Foreign Countries, Language Research