Publication Date

| Date range | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience

| Audience | Results |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location

| Location | Results |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating

| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Flynn, Alison B.; Featherstone, Ryan B. – Chemistry Education Research and Practice, 2017
This study investigated students' successes, strategies, and common errors in their answers to questions that involved the electron-pushing (curved arrow) formalism (EPF), part of organic chemistry's language. We analyzed students' answers to two question types on midterms and final exams: (1) draw the electron-pushing arrows of a reaction step,…
Descriptors: Organic Chemistry, Error Patterns, Science Tests, Test Items
Gallagher, Kristel M. – College Teaching, 2017
Students often struggle to recall information on tests, frequently claiming to experience a "retrieval failure" of learned information. Thus, the retrieval of information from memory may be a roadblock to student success. I propose a relatively simple adjustment to the wording of test items to help eliminate this potential barrier.…
Descriptors: Cues, Tests, Recall (Psychology), Test Items
Sinharay, Sandip – Journal of Educational Measurement, 2017
Person-fit assessment (PFA) is concerned with uncovering atypical test performance as reflected in the pattern of scores on individual items on a test. Existing person-fit statistics (PFSs) include both parametric and nonparametric statistics. Comparison of PFSs has been a popular research topic in PFA, but almost all comparisons have employed…
Descriptors: Goodness of Fit, Testing, Test Items, Scores
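
As context for the Sinharay (2017) entry above: person-fit statistics flag response patterns that are unlikely given a person's overall performance. Below is a minimal sketch of one classic nonparametric PFS, the Guttman error count. It is illustrative only, not necessarily one of the statistics compared in the article, and the function name and toy data are invented for the example.

```python
import numpy as np

def guttman_errors(responses, difficulties):
    """Count Guttman errors for one examinee: the number of item pairs in
    which a harder item was answered correctly while an easier one was
    missed. Larger counts suggest a less typical response pattern."""
    order = np.argsort(difficulties)           # sort items easiest -> hardest
    r = np.asarray(responses)[order]
    errors = 0
    for easy in range(len(r)):
        for hard in range(easy + 1, len(r)):
            if r[easy] == 0 and r[hard] == 1:  # easier wrong, harder right
                errors += 1
    return errors

# toy data: 5 items, larger difficulty value = harder item
print(guttman_errors([1, 1, 0, 1, 0], difficulties=[0.1, 0.3, 0.5, 0.7, 0.9]))  # -> 1
```

A response vector with many harder-right/easier-wrong pairs yields a high count, signaling atypical test performance worth a closer look.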
Smith, William Zachary; Dickenson, Tammiee S.; Rogers, Bradley David – AERA Online Paper Repository, 2017
Questionnaire refinement and a process for selecting items for elimination are important tools for survey developers. One of the major obstacles in questionnaire refinement and item elimination lies in the developer's ability to adequately and appropriately reconstruct a survey. Oftentimes, surveys can be long and taxing for the respondent,…
Descriptors: Surveys, Psychometrics, Test Construction, Test Reliability
Liu, Ou Lydia; Rios, Joseph A.; Heilman, Michael; Gerard, Libby; Linn, Marcia C. – Journal of Research in Science Teaching, 2016
Constructed response items can both measure the coherence of student ideas and serve as reflective experiences to strengthen instruction. We report on new automated scoring technologies that can reduce the cost and complexity of scoring constructed-response items. This study explored the accuracy of c-rater-ML, an automated scoring engine…
Descriptors: Science Tests, Scoring, Automation, Validity
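
The Liu et al. (2016) entry above concerns automated scoring of constructed responses. The c-rater-ML engine itself is not public, so as a rough, hedged illustration of the general approach (not the engine's actual method), here is a bag-of-words regression scorer built with scikit-learn; the training sentences and human scores are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# tiny invented training set of (student response, human score) pairs
texts = [
    "plants convert sunlight into chemical energy",
    "the plant eats dirt to grow",
    "photosynthesis turns light water and co2 into sugar",
    "plants grow because of soil",
]
scores = [2, 0, 2, 1]

# vectorize responses and fit a ridge regression to the human scores
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(texts, scores)
print(model.predict(["light energy becomes sugar in the plant"]))
```

Production engines add richer linguistic features and human-machine agreement checks; this sketch only shows the train-then-predict shape of the task.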
A Comparison of Four Differential Item Functioning Procedures in the Presence of Multidimensionality
Bastug, Özlem Yesim Özbek – Educational Research and Reviews, 2016
Differential item functioning (DIF), or item bias, is a relatively new concept. It has been one of the most controversial and most studied subjects in measurement theory. DIF occurs when people who have the same ability level but come from different groups have a different probability of a correct response. According to Item Response Theory (IRT),…
Descriptors: Test Bias, Comparative Analysis, Item Response Theory, Regression (Statistics)
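
To make the DIF definition in the Bastug (2016) abstract concrete: one widely used screening procedure, logistic regression DIF (possibly, but not necessarily, among the four the article compares), tests whether group membership predicts item success after conditioning on ability. A minimal sketch on simulated data follows, with all variable names and effect sizes invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)        # 0 = reference group, 1 = focal group
ability = rng.normal(0, 1, n)        # matching variable (e.g., total score)

# simulate uniform DIF: the focal group is disadvantaged at equal ability
logit = 0.8 * ability - 0.6 * group
item = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# predictors: ability, group, and their interaction
X = sm.add_constant(np.column_stack([ability, group, ability * group]))
fit = sm.Logit(item, X).fit(disp=0)
print(fit.params)  # group term ~ uniform DIF; interaction ~ nonuniform DIF
```

A significant group coefficient at equal ability is the signature of uniform DIF; a significant interaction term indicates nonuniform DIF.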
Irwin, Clare W.; Stafford, Erin T. – Regional Educational Laboratory Northeast & Islands, 2016
This guide describes a five-step collaborative process that educators can use with other educators, researchers, and content experts to write or adapt questions and develop surveys for education contexts. This process allows educators to leverage the expertise of individuals within and outside of their organization to ensure a high-quality survey…
Descriptors: Surveys, Test Construction, Educational Cooperation, Test Items
Lindner, Marlit A.; Schult, Johannes; Mayer, Richard E. – Journal of Educational Psychology, 2022
This classroom experiment investigates the effects of adding representational pictures to multiple-choice and constructed-response test items to understand the role of the response format for the multimedia effect in testing. Participants were 575 fifth- and sixth-graders who answered 28 science test items--seven items in each of four experimental…
Descriptors: Elementary School Students, Grade 5, Grade 6, Multimedia Materials
Min, Shangchao; Bishop, Kyoungwon; Gary Cook, Howard – Language Testing, 2022
This study explored the interplay between content knowledge and reading ability in a large-scale multistage adaptive English for academic purposes (EAP) reading assessment at a range of ability levels across grades 1-12. The datasets for this study were item-level responses to the reading tests of ACCESS for ELLs Online 2.0. A sample of 10,000…
Descriptors: Item Response Theory, English Language Learners, Correlation, Reading Ability
Alhadi, Moosa A. A.; Zhang, Dake; Wang, Ting; Maher, Carolyn A. – North American Chapter of the International Group for the Psychology of Mathematics Education, 2022
This research synthesizes studies that used a Digitalized Interactive Component (DIC) to assess K-12 student mathematics performance during Computer-based-Assessments (CBAs) in mathematics. A systematic search identified ten studies that categorized existing DICs according to the tools that provided language assistance to students and tools that…
Descriptors: Computer Assisted Testing, Mathematics Tests, English Language Learners, Geometry
Pelánek, Radek; Effenberger, Tomáš; Kukucka, Adam – Journal of Educational Data Mining, 2022
We study the automatic identification of educational items worthy of content authors' attention. Based on the results of such analysis, content authors can revise and improve the content of learning environments. We provide an overview of item properties relevant to this task, including difficulty and complexity measures, item discrimination, and…
Descriptors: Item Analysis, Identification, Difficulty Level, Case Studies
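
The Pelánek, Effenberger, and Kukucka (2022) entry above lists difficulty and item discrimination among the properties that flag items worth an author's attention. As a hedged illustration of the two most basic classical measures (not the article's specific procedure; the function name and toy data are invented), a short sketch:

```python
import numpy as np

def item_stats(scores):
    """Classical item analysis on a persons-by-items 0/1 matrix:
    difficulty is the proportion correct per item; discrimination is the
    correlation of each item with the rest score (total minus that item)."""
    scores = np.asarray(scores, dtype=float)
    difficulty = scores.mean(axis=0)
    discrimination = []
    for j in range(scores.shape[1]):
        rest = scores.sum(axis=1) - scores[:, j]  # avoids item-total overlap
        discrimination.append(np.corrcoef(scores[:, j], rest)[0, 1])
    return difficulty, np.array(discrimination)

# toy data: 4 students x 3 items
X = [[1, 1, 0],
     [1, 0, 0],
     [1, 1, 1],
     [0, 0, 0]]
print(item_stats(X))
```

Items with near-zero or negative discrimination are natural candidates for revision by content authors.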
Albudoor, Nahar; Peña, Elizabeth D. – Journal of Speech, Language, and Hearing Research, 2022
Purpose: The differential diagnosis of developmental language disorder (DLD) in bilingual children represents a unique challenge due to their distributed language exposure and knowledge. The current evidence indicates that dual-language testing yields the most accurate classification of DLD among bilinguals, but there are limited personnel and…
Descriptors: Language Impairments, Bilingualism, Clinical Diagnosis, Language Tests
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
National Academies Press, 2022
The National Assessment of Educational Progress (NAEP) -- often called "The Nation's Report Card" -- is the largest nationally representative and continuing assessment of what students in public and private schools in the United States know and can do in various subjects and has provided policy makers and the public with invaluable…
Descriptors: Costs, Futures (of Society), National Competency Tests, Educational Trends
Andersson, Björn; Xin, Tao – Educational and Psychological Measurement, 2018
In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…
Descriptors: Item Response Theory, Test Reliability, Test Items, Scores
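
The Andersson and Xin (2018) entry above concerns the sampling variability of estimated reliability coefficients in IRT. For context, one common point estimate in IRT applications (possibly among the coefficients the article treats) is the empirical reliability of EAP ability estimates. A minimal sketch is below; the toy numbers are invented, and the article's contribution, analytical standard errors for such coefficients, is not implemented here.

```python
import numpy as np

def empirical_reliability(theta_hat, psd):
    """Empirical reliability of EAP ability estimates: variance of the
    point estimates over that variance plus the mean squared posterior
    standard deviation (the average error variance)."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    psd = np.asarray(psd, dtype=float)
    signal = np.var(theta_hat, ddof=1)  # between-person variance
    noise = np.mean(psd ** 2)           # average error variance
    return signal / (signal + noise)

# toy example: five examinees' EAP estimates and posterior SDs
print(empirical_reliability([-1.2, -0.4, 0.1, 0.7, 1.5],
                            [0.35, 0.30, 0.28, 0.31, 0.40]))
```

The article's point is that this single number is itself an estimate with sampling variability, which is rarely reported alongside it.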
