Showing 1 to 15 of 181 results
Peer reviewed
Direct link
Kam, Chester Chun Seng – Educational and Psychological Measurement, 2023
When constructing measurement scales, regular and reversed items are often used (e.g., "I am satisfied with my job"/"I am not satisfied with my job"). Some methodologists recommend excluding reversed items because they are more difficult to understand and therefore engender a second, artificial factor distinct from the…
Descriptors: Test Items, Difficulty Level, Test Construction, Construct Validity
Timothy Donald Folger – ProQuest LLC, 2024
This dissertation aims to bridge the gap between validity theory and the practice of validation. The dissertation employs a three-article approach. Following the introduction in Chapter I, three independent manuscripts representing three empirical studies are presented (i.e., Chapters II - IV). Each chapter is a stand-alone publishable manuscript,…
Descriptors: Educational Testing, Psychological Testing, Test Validity, Delphi Technique
Peer reviewed
Direct link
Xuelan Qiu; Jimmy de la Torre; You-Gan Wang; Jinran Wu – Educational Measurement: Issues and Practice, 2024
Multidimensional forced-choice (MFC) items have been found to be useful to reduce response biases in personality assessments. However, conventional scoring methods for the MFC items result in ipsative data, hindering the wider applications of the MFC format. In the last decade, a number of item response theory (IRT) models have been developed,…
Descriptors: Item Response Theory, Personality Traits, Personality Measures, Personality Assessment
Peer reviewed
Direct link
Suthathip Thirakunkovit – Language Testing in Asia, 2025
Establishing a cut score is a crucial aspect of the test development process since the selected cut score has the potential to impact students' performance outcomes and shape instructional strategies within the classroom. Therefore, it is vital for those involved in test development to set a cut score that is both fair and justifiable. This cut…
Descriptors: Cutting Scores, Culture Fair Tests, Language Tests, Test Construction
W. James Popham – Pearson, 2024
"Classroom Assessment" shows pre- and in-service teachers how to use classroom testing accurately and formatively to dramatically increase their teaching effectiveness and promote student learning. In addition to clear and concise guidelines on how to develop and use quality classroom assessments, the author also focuses on the teaching…
Descriptors: Student Evaluation, Testing, Teacher Effectiveness, Test Construction
Zebing Wu – ProQuest LLC, 2024
Response style, a common aberrancy in non-cognitive assessments in psychological fields, is problematic because it produces inaccurate estimates of item and person parameters, which in turn raises serious reliability, validity, and fairness issues (Baumgartner & Steenkamp, 2001; Bolt & Johnson, 2009; Bolt & Newton, 2011). Response style refers to…
Descriptors: Response Style (Tests), Accuracy, Preferences, Psychological Testing
Peer reviewed
Direct link
Beaujean, A. Alexander; Benson, Nicholas F. – Contemporary School Psychology, 2019
Clinical cognitive ability assessment--and its corollary, score interpretation--are in a state of disarray. Many current instruments are designed to provide a bevy of scores to appeal to a variety of school psychologists. These scores are not all grounded in the attribute's theory or developed from sound measurement or psychometric theory. Thus,…
Descriptors: Cognitive Ability, Scores, School Psychologists, Test Construction
Peer reviewed
PDF on ERIC Download full text
Beck, Klaus – Frontline Learning Research, 2020
Many test developers try to ensure the content validity of their tests by having external experts review the items, e.g. in terms of relevance, difficulty, or clarity. Although this approach is widely accepted, a closer look reveals several pitfalls that need to be avoided if experts' advice is to be truly helpful. The purpose of this paper is to…
Descriptors: Content Validity, Psychological Testing, Educational Testing, Student Evaluation
Peer reviewed
Direct link
Kimmia Lyon; Jessica B. Koslouski; Sandra M. Chafouleas; Amy M. Briesch; Jacqueline M. Caemmerer – Grantee Submission, 2025
Existing educational assessments have typically been developed without appropriate attention to the intended and unintended consequences of measure implementation and interpretation. We are developing the Expanding Screening to Support Youth (ESSY) Whole Child Screener using a mixed methods approach that attends to the intended and unintended…
Descriptors: Student Attitudes, Screening Tests, Validity, Grade 3
Peer reviewed
Direct link
Torres Irribarra, David – Measurement: Interdisciplinary Research and Perspectives, 2017
Maul's paper, "Rethinking Traditional Methods of Survey Validation," is a clever and pointed indictment of a set of specific but widespread practices in psychological measurement and the social sciences at large. Through it, Maul highlights central issues in how theory building and theory testing are approached, bringing to mind the…
Descriptors: Surveys, Validity, Methods, Psychological Characteristics
Nebraska Department of Education, 2021
This technical report documents the processes and procedures implemented to support the Spring 2021 Nebraska Student-Centered Assessment System (NSCAS) Phase I Pilot in English Language Arts (ELA), Mathematics, and Science assessments by NWEA® under the supervision of the Nebraska Department of Education (NDE). The technical report shows how the…
Descriptors: Psychometrics, Standard Setting, English, Language Arts
Vaske, Jerry J. – Sagamore-Venture, 2019
Data collected from surveys can result in hundreds of variables and thousands of respondents. This implies that time and energy must be devoted to (a) carefully entering the data into a database, (b) running preliminary analyses to identify any problems (e.g., missing data, potential outliers), (c) checking the reliability and validity of the…
Descriptors: Surveys, Theories, Hypothesis Testing, Effect Size
Nebraska Department of Education, 2022
In Winter 2021-2022, the Nebraska Student-Centered Assessment System (NSCAS) assessments are administered in ELA and mathematics in Grades 3-8. In Spring 2021-2022, the NSCAS assessments are administered in English language arts (ELA) and mathematics in Grades 3-8 and in science in Grades 5 and 8. The purposes of the NSCAS assessments are to…
Descriptors: English, Language Arts, Student Centered Learning, Mathematics Tests
Peer reviewed
PDF on ERIC Download full text
Bichi, Ado Abdu; Talib, Rohaya – International Journal of Evaluation and Research in Education, 2018
Testing in educational systems performs a number of functions, and the results from a test can be used to make a number of decisions in education. It is therefore well accepted in the education literature that testing is an important element of education. To effectively utilize tests in educational policies and quality assurance, their validity and…
Descriptors: Item Response Theory, Test Items, Test Construction, Decision Making
Peer reviewed
Direct link
Lewis, Todd F. – Measurement and Evaluation in Counseling and Development, 2017
American Educational Research Association (AERA) standards stipulate that researchers show evidence of the internal structure of instruments. Confirmatory factor analysis (CFA) is one structural equation modeling procedure designed to assess construct validity of assessments that has broad applicability for counselors interested in instrument…
Descriptors: Educational Research, Factor Analysis, Structural Equation Models, Construct Validity