Showing 1 to 15 of 43 results
Peer reviewed
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Areekkuzhiyil, Santhosh – Online Submission, 2021
Assessment is an integral part of any teaching-learning process, and it has a large number of functions to perform, whether formative or summative. This paper analyses the issues involved and the areas of concern in classroom assessment practice and discusses the recent reforms that have taken place. [This paper was published in Edutracks v20 n8…
Descriptors: Student Evaluation, Formative Evaluation, Summative Evaluation, Test Validity
Peer reviewed
Patrick Kyllonen; Amit Sevak; Teresa Ober; Ikkyu Choi; Jesse Sparks; Daniel Fishtein – ETS Research Report Series, 2024
Assessment refers to a broad array of approaches for measuring or evaluating a person's (or group of persons') skills, behaviors, dispositions, or other attributes. Assessments range from standardized tests used in admissions, employee selection, licensure examinations, and domestic and international large-scale assessments of cognitive and…
Descriptors: Assessment Literacy, Testing, Test Bias, Test Construction
Peer reviewed
Joseph, Dane Christian – Journal of Effective Teaching in Higher Education, 2019
Multiple-choice testing is a staple within the U.S. higher education system. From classroom assessments to standardized entrance exams such as the GRE, GMAT, or LSAT, test developers utilize a variety of validated and heuristic-driven item-writing guidelines. One such guideline that has been given recent attention is to randomize the position of…
Descriptors: Test Construction, Multiple Choice Tests, Guessing (Tests), Test Wiseness
Peer reviewed
Moses, Tim – ETS Research Report Series, 2013
The purpose of this report is to review ETS psychometric contributions that focus on test scores. Two major sections review contributions on assessing test scores' measurement characteristics and on using test scores as predictors in correlational and regression relationships. An additional section reviews additional…
Descriptors: Psychometrics, Scores, Correlation, Regression (Statistics)
Lee, Eunjung; Lee, Won-Chan; Brennan, Robert L. – College Board, 2012
In almost all high-stakes testing programs, test equating is necessary to ensure that test scores across multiple test administrations are equivalent and can be used interchangeably. Test equating becomes even more challenging in mixed-format tests, such as Advanced Placement Program® (AP®) Exams, that contain both multiple-choice and constructed…
Descriptors: Test Construction, Test Interpretation, Test Norms, Test Reliability
Peer reviewed
O'Reilly, Tenaha; Sabatini, John – ETS Research Report Series, 2013
This paper represents the third installment of the Reading for Understanding (RfU) assessment framework. This paper builds upon the two prior installments (Sabatini & O'Reilly, 2013; Sabatini, O'Reilly, & Deane, 2013) by discussing the role of performance moderators in the test design and how scenario-based assessment can be used as a tool…
Descriptors: Reading Comprehension, Reading Tests, Test Construction, Student Characteristics
Wolf, Raffaela; Zahner, Doris; Kostoris, Fiorella; Benjamin, Roger – Council for Aid to Education, 2014
The measurement of higher-order competencies within a tertiary education system across countries presents methodological challenges due to differences in educational systems, socio-economic factors, and perceptions as to which constructs should be assessed (Blömeke, Zlatkin-Troitschanskaia, Kuhn, & Fege, 2013). According to Hart Research…
Descriptors: Case Studies, International Assessment, Performance Based Assessment, Critical Thinking
Peer reviewed
National Center for Education Statistics, 2007
The purpose of this document is to provide background information that will be useful in interpreting the 2007 results from the Trends in International Mathematics and Science Study (TIMSS) by comparing its design, features, framework, and items with those of the U.S. National Assessment of Educational Progress and another international assessment…
Descriptors: National Competency Tests, Comparative Analysis, Achievement Tests, Test Items
Rentz, R. Robert – New Directions for Testing and Measurement, 1981
In this speculative article, Robert R. Rentz predicts that state educational assessment programs will continue to be fertile ground for measurement innovations, both technological and otherwise, in the areas of test content, development, administration and the reporting of test results. (AEF)
Descriptors: Educational Innovation, Prediction, State Programs, Test Construction
Peer reviewed
Guess, Pamela – Journal of Psychoeducational Assessment, 2006
The OMNI Personality Inventory (OMNI) is a self-report questionnaire designed for use with adolescents and adults between 18 and 74 years of age. The questionnaire is not based on a particular theory, consistent with current trends in test development, according to the author. An abbreviated form of the OMNI, the OMNI-IV Personality Disorder…
Descriptors: Personality Measures, Questionnaires, Adolescents, Adults
Rhode Island Department of Elementary and Secondary Education, 2007
This handbook will assist principals and school testing coordinators in implementing the spring 2007 administration of the Developmental Reading Assessment (DRA). Information regarding the administration timeline, reporting process, online tools, and contact personnel is discussed. Contents include: (1) Scheduling; (2) Identify Primary Test…
Descriptors: Testing Accommodations, Alternative Assessment, Educational Testing, Guidance Programs
Peer reviewed
Kempa, R. F.; L'Odiaga, J. – Educational Research, 1984
Examines the extent to which grades derived from a conventional norm-referenced examination can be interpreted in terms of criterion-referenced performance assessments of different abilities and skills. Results suggest that performance is more affected by test format and subject matter than by the intellectual abilities the examinations are intended to measure. (JOW)
Descriptors: Criterion Referenced Tests, Norm Referenced Tests, Test Construction, Test Format
Shermis, Mark D.; DiVesta, Francis J. – Rowman & Littlefield Publishers, Inc., 2011
"Classroom Assessment in Action" clarifies the multi-faceted roles of measurement and assessment and their applications in a classroom setting. Comprehensive in scope, Shermis and Di Vesta explain basic measurement concepts and show students how to interpret the results of standardized tests. From these basic concepts, the authors then…
Descriptors: Student Evaluation, Standardized Tests, Scores, Measurement
Peer reviewed
Hsu, Louis M. – Multivariate Behavioral Research, 1994
Item overlap coefficient (IOC) formulas are discussed, providing six warnings about their calculation and interpretation and some explanations of why item overlap influences the Minnesota Multiphasic Personality Inventory and the Millon Clinical Multiaxial Inventory factor structures. (SLD)
Descriptors: Correlation, Definitions, Equations (Mathematics), Factor Structure