Showing 1 to 15 of 312 results
Peer reviewed
Direct link
Sandra Camargo Salamanca; Maria Elena Oliveri; April L. Zenisky – International Journal of Testing, 2025
This article describes the 2022 "ITC/ATP Guidelines for Technology-Based Assessment" (TBA), a collaborative effort by the International Test Commission (ITC) and the Association of Test Publishers (ATP) to address digital assessment challenges. Developed by over 100 global experts, these "Guidelines" emphasize fairness,…
Descriptors: Guidelines, Standards, Technology Uses in Education, Computer Assisted Testing
Chris Jellis – Research Matters, 2024
The Centre for Evaluation and Monitoring (CEM), based in the North of England, recently celebrated its 40th birthday. Arising from an evaluation project at Newcastle University, and a subsequent move to Durham University, it rapidly grew in scope and influence, developing a series of highly regarded school assessments. For a relatively small…
Descriptors: Educational Assessment, Foreign Countries, Computer Assisted Testing, Adaptive Testing
ETS Research Institute, 2024
ETS experts are exploring and defining the standards for responsible AI use in assessments. A comprehensive framework and principles will be unveiled in the coming months. In the meantime, this document outlines the critical areas these standards will encompass, including the principles of: (1) Fairness and bias mitigation; (2) Privacy and…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Educational Testing, Ethics
Peer reviewed
PDF on ERIC
National Assessment of Educational Progress (NAEP), 2023
The National Assessment of Educational Progress (NAEP) is an integral measure of academic progress across the nation and over time. It is the largest nationally representative and continuing assessment of what our nation's students know and can do in various subjects such as mathematics, reading, and science. The program also provides valuable…
Descriptors: School Districts, Educational Assessment, Grade 4, Grade 8
Peer reviewed
Direct link
Gardner, John; O'Leary, Michael; Yuan, Li – Journal of Computer Assisted Learning, 2021
Artificial Intelligence is at the heart of modern society with computers now capable of making process decisions in many spheres of human activity. In education, there has been intensive growth in systems that make formal and informal learning an anytime, anywhere activity for billions of people through online open educational resources and…
Descriptors: Artificial Intelligence, Educational Assessment, Formative Evaluation, Summative Evaluation
Peer reviewed
Direct link
Sinharay, Sandip – Educational Measurement: Issues and Practice, 2021
Technical difficulties occasionally lead to missing item scores and hence to incomplete data on computerized tests. It is not straightforward to report scores to the examinees whose data are incomplete due to technical difficulties. Such reporting essentially involves imputation of missing scores. In this paper, a simulation study based on data…
Descriptors: Data Analysis, Scores, Educational Assessment, Educational Testing
Peer reviewed
Direct link
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Grantee Submission, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-In-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this paper, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
Peer reviewed
Direct link
Stephen G. Sireci; Javier Suárez-Álvarez; April L. Zenisky; Maria Elena Oliveri – Educational Measurement: Issues and Practice, 2024
The goal in personalized assessment is to best fit the needs of each individual test taker, given the assessment purposes. Design-in-Real-Time (DIRTy) assessment reflects the progressive evolution in testing from a single test, to an adaptive test, to an adaptive assessment "system." In this article, we lay the foundation for DIRTy…
Descriptors: Educational Assessment, Student Needs, Test Format, Test Construction
Olson, John F.; Lazarus, Sheryl S.; Thurlow, Martha L.; Quanbeck, Mari – National Center on Educational Outcomes, 2021
This report provides a snapshot of how accommodated tests for students with disabilities, accessibility, alternate assessments, and other related issues were addressed in states' test security policies for 2020-21. Strong test security policies and procedures are needed to help ensure the integrity and validity of state assessments, yet some test…
Descriptors: Testing Accommodations, Information Security, State Policy, Policy Analysis
Peer reviewed
Direct link
David Eubanks; Scott A. Moore – Assessment Update, 2025
Assessment and institutional research offices have too much data and too little time. Standard reporting often crowds out opportunities for innovative research. Fortunately, advancements in data science now offer a clear solution. It is equal parts technique and philosophy. The first and easiest step is to modernize data work. This column…
Descriptors: Higher Education, Educational Assessment, Data Science, Research Methodology
Florida Department of Education, 2022
All Florida schools teach the Florida Standards in English Language Arts (ELA) and Mathematics and the Next Generation Sunshine State Standards (NGSSS) in Science and Social Studies. Student performance provides important information to parents/guardians, teachers, policy makers, and the general public regarding how well students are learning.…
Descriptors: Educational Assessment, State Standards, Language Arts, Mathematics
Peer reviewed
Direct link
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
Peer reviewed
Direct link
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both test validity and test security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment
Jimenez, Laura; Modaffari, Jamil – Center for American Progress, 2021
Assessments are a way for stakeholders in education to understand what students know and can do. They can take many forms, including but not limited to paper and pencil or computer-adaptive formats. However, assessments do not have to be tests in the traditional sense at all; rather, they can be carried out through teacher observations of students…
Descriptors: Equal Education, Elementary Secondary Education, Futures (of Society), Computer Assisted Testing
Peer reviewed
Direct link
Stadler, Matthias; Brandl, Laura; Greiff, Samuel – Journal of Computer Assisted Learning, 2023
Background: Over the last 20 years, educational large-scale assessments have undergone dramatic changes moving away from simple paper-pencil assessments to innovative, technology-based assessments. This comprehensive switch has led to some rather technical improvements such as identifying early guessing or improving standardization. Objectives: At…
Descriptors: Interaction, Measurement, Data Processing, Sustainability