Publication Date
| Period | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 215 |
| Since 2022 (last 5 years) | 1084 |
| Since 2017 (last 10 years) | 2594 |
| Since 2007 (last 20 years) | 4955 |
Audience
| Audience | Count |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Count |
| --- | --- |
| Turkey | 226 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 66 |
What Works Clearinghouse Rating
| Rating | Count |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does Not Meet WWC Standards | 1 |
Suwita Suwita; Sulistyo Saputro; Sajidan Sajidan; Sutarno Sutarno – Journal of Baltic Science Education, 2024
The current study uses the Rasch Model to measure lower-secondary school students' critical thinking skills on photosynthesis topics. Critical thinking skills are considered essential in science education, but valid and practical measurement instruments remain scarce. The current study fills this gap by adapting the instrument from the Watson-Glaser…
Descriptors: Secondary School Students, Critical Thinking, Thinking Skills, Botany
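The Rasch model named in this abstract has a closed-form item response function. As a minimal, library-free sketch (illustrative only, not the authors' instrument or code), the probability of a correct response and a simple bisection-based ability estimate can be written as:

```python
import math

def rasch_probability(theta, b):
    """Rasch (1PL) item response function:
    P(correct | ability theta, item difficulty b) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected raw score for an examinee of ability theta over a set of items."""
    return sum(rasch_probability(theta, b) for b in difficulties)

def estimate_theta(raw_score, difficulties, lo=-6.0, hi=6.0, tol=1e-9):
    """Bisection estimate of the ability whose expected score matches the
    observed raw score (valid because expected_score is increasing in theta)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if expected_score(mid, difficulties) < raw_score:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For example, an examinee who answers one of two items of difficulty 0 correctly is estimated at theta ≈ 0; operational Rasch analyses use joint or conditional maximum likelihood over the full response matrix rather than this per-person shortcut.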
Stuart A. Miller; Sara J. Finney – Assessment Update, 2024
A simple act of motivation priming can significantly impact the validity of test results, which is crucial for institutional accountability. The progression of the studies discussed in this article illustrates a clear trajectory of building upon previous findings to refine and expand the understanding of motivation priming when gathering…
Descriptors: Student Behavior, Student Motivation, Intervention, Behavior Modification
Huelmann, Thorben; Debelak, Rudolf; Strobl, Carolin – Journal of Educational Measurement, 2020
This study addresses the topic of how anchoring methods for differential item functioning (DIF) analysis can be used in multigroup scenarios. The direct approach would be to combine anchoring methods developed for two-group scenarios with multigroup DIF-detection methods. Alternatively, multiple tests could be carried out. The results of these…
Descriptors: Test Items, Test Bias, Equated Scores, Item Analysis
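Two-group DIF screening of the kind this article builds on is often done with the Mantel-Haenszel procedure. A minimal sketch (a generic illustration, not the anchoring methods the authors study) is:

```python
import math

def mantel_haenszel_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across matching-score strata.
    Each stratum is a 2x2 table (ref_correct, ref_incorrect,
    focal_correct, focal_incorrect); a value near 1.0 suggests
    no uniform DIF on the studied item."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

def ets_delta_dif(alpha_mh):
    """ETS delta-scale transform, MH D-DIF = -2.35 * ln(alpha_MH);
    conventionally, |D-DIF| >= 1.5 flags large ("C"-category) DIF."""
    return -2.35 * math.log(alpha_mh)
```

Balanced strata give an odds ratio of 1.0 (D-DIF of 0); the multigroup setting the article addresses requires either repeated pairwise tests like this or a dedicated multigroup statistic.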
Ashley J. Harrison; Matthew Madison; Nilofer Naqvi; Karrah Bowman; Jonathan Campbell – Autism: The International Journal of Research and Practice, 2025
The Autism Stigma and Knowledge Questionnaire (ASK-Q) was developed and validated to assess autism knowledge across cultural contexts. Given the wide international use of the measure, the current study aimed to re-examine the measurement properties of the ASK-Q. Using a large, international database (n = 5064), psychometric analyses examined both…
Descriptors: Autism Spectrum Disorders, Social Bias, Attitudes toward Disabilities, Questionnaires
Bryan R. Drost; Char Shryock – Phi Delta Kappan, 2025
Creating assessment questions aligned to standards is a time-consuming task for teachers, but large language models such as ChatGPT can help. Bryan Drost & Char Shryock describe a three-step process for using ChatGPT to create assessments: 1) Ask ChatGPT to break standards into measurable targets. 2) Determine how much time to spend on each…
Descriptors: Artificial Intelligence, Computer Software, Technology Integration, Teaching Methods
Fadime Hatice Inci; Ferhat Çelik – Psychology in the Schools, 2025
The aim of this study is to examine the validity, reliability, and responsiveness of the Turkish version of the Adolescent Health Promotion-Short Form (AHP-SF). This cross-sectional study was completed with 1483 students. Confirmatory factor analysis (CFA) supported the construct validity of the scale, demonstrating a good model fit with…
Descriptors: Foreign Countries, Measures (Individuals), Adolescents, Health Promotion
Fatih Orçan – International Journal of Assessment Tools in Education, 2025
Factor analysis is a statistical method to explore the relationships among observed variables and identify latent structures. It is crucial in scale development and validity analysis. Key factors affecting the accuracy of factor analysis results include the type of data, sample size, and the number of response categories. While some studies…
Descriptors: Factor Analysis, Factor Structure, Item Response Theory, Sample Size
Schweizer, Karl; Wang, Tengfei; Ren, Xuezhu – Journal of Experimental Education, 2022
The essay reports two studies on confirmatory factor analysis of speeded data with an effect of selective responding. This response strategy leads test takers to choose their own working order instead of completing the items in the given order. Methods for detecting speededness despite such a deviation from the given order are proposed and…
Descriptors: Factor Analysis, Response Style (Tests), Decision Making, Test Items
Hyland, Diarmaid; O'Shea, Ann – Teaching Mathematics and Its Applications, 2022
In this study, we conducted a survey of all tertiary level institutions in Ireland to find out how many of them use diagnostic tests, and what kind of mathematical content areas and topics appear on these tests. The information gathered provides an insight into what instructors expect students to know on entry to university and what they expect…
Descriptors: Foreign Countries, Diagnostic Tests, Mathematics Tests, College Freshmen
Arikan, Serkan; Erktin, Emine; Pesen, Melek – International Journal of Science and Mathematics Education, 2022
The aim of this study is to construct a STEM competencies assessment framework and provide validity evidence by empirically testing its structure. Common interdisciplinary assessment frameworks for STEM seem to be scarce in the literature. Many studies use students' mathematics or science scores obtained from large-scale assessments or exams to…
Descriptors: STEM Education, Competence, Interdisciplinary Approach, Test Construction
Ally, Said – International Journal of Education and Development using Information and Communication Technology, 2022
Moodle software has become the heart of teaching and learning services in education. The software is viewed as a trusted modern platform for shifting teaching and learning from conventional face-to-face classes to fully online ones. However, its use for online examination remains very limited, despite a state-of-the-art Quiz Module with…
Descriptors: Integrated Learning Systems, Computer Assisted Testing, Information Security, Evaluation Methods
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2022
Two independent statistical tests of item compromise are presented, one based on the test takers' responses and the other on their response times (RTs) on the same items. The tests can be used to monitor an item in real time during online continuous testing but are also applicable as part of post hoc forensic analysis. The two test statistics are…
Descriptors: Test Items, Item Analysis, Item Response Theory, Computer Assisted Testing
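One of the two tests described works from response times. A simplified stand-in for that idea (not the article's actual statistics) is a z-test comparing a recent window of log response times on an item against its pre-compromise baseline; unusually fast responses push the statistic strongly negative:

```python
import math
import statistics

def rt_compromise_z(baseline_log_rts, window_log_rts):
    """Z-statistic for the mean of a recent window of log response times
    against a pre-compromise baseline on the same item. A strongly
    negative value (unusually fast responding) can flag possible
    item exposure; hypothetical screening sketch only."""
    mu = statistics.mean(baseline_log_rts)
    sigma = statistics.stdev(baseline_log_rts)
    m = len(window_log_rts)
    window_mean = statistics.mean(window_log_rts)
    return (window_mean - mu) / (sigma / math.sqrt(m))
```

Working on the log scale matches the common lognormal model for response times; in real-time monitoring the window would slide over incoming examinees and the flag threshold would be set to control false alarms across many items.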
Student, Sanford R.; Gong, Brian – Educational Measurement: Issues and Practice, 2022
We address two persistent challenges in large-scale assessments of the Next Generation Science Standards: (a) the validity of score interpretations that target the standards broadly and (b) how to structure claims for assessments of this complex domain. The NGSS pose a particular challenge for specifying claims about students that evidence from…
Descriptors: Science Tests, Test Validity, Test Items, Test Construction
Jiajing Huang – ProQuest LLC, 2022
The nonequivalent-groups anchor-test (NEAT) data-collection design is commonly used in large-scale assessments. Under this design, different test groups take different test forms. Each test form has its own unique items and all test forms share a set of common items. If item response theory (IRT) models are applied to analyze the test data, the…
Descriptors: Item Response Theory, Test Format, Test Items, Test Construction
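Under the NEAT design, the common items are what place the two forms' parameter estimates on one scale. A minimal mean-shift sketch for the Rasch case (a hypothetical simplification of the IRT linking this dissertation concerns; 2PL/3PL linking also needs a slope constant) is:

```python
from statistics import mean

def rasch_linking_constant(anchor_b_form_x, anchor_b_form_y):
    """Difference of mean common-item difficulties; under the Rasch model,
    adding this constant moves Form X parameters onto the Form Y scale."""
    return mean(anchor_b_form_y) - mean(anchor_b_form_x)

def link_to_form_y(b_values_form_x, constant):
    """Apply the linking constant to all Form X difficulty estimates."""
    return [b + constant for b in b_values_form_x]
```

If the anchor items average 0.5 logits harder as estimated on Form Y than on Form X, every Form X difficulty is shifted up by 0.5 before the forms' unique items are compared or scores are equated.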
Wang, Weimeng – ProQuest LLC, 2022
Recent advancements in testing differential item functioning (DIF) have greatly relaxed restrictions made by the conventional multiple group item response theory (IRT) model with respect to the number of grouping variables and the assumption of predefined DIF-free anchor items. The application of the L₁ penalty in DIF detection has…
Descriptors: Factor Analysis, Item Response Theory, Statistical Inference, Item Analysis
