Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 4
Descriptor
Test Items: 4
Test Selection: 4
Test Construction: 3
Item Analysis: 2
Psychometrics: 2
Test Bias: 2
Academic Accommodations…: 1
Access to Education: 1
Adaptive Testing: 1
Alignment (Education): 1
Benchmarking: 1
Source
Assessment and Accountability…: 1
International Journal for the…: 1
Journal of Applied Testing…: 1
ProQuest LLC: 1
Author
Ackermann, Richard: 1
Brownell, Sara E.: 1
Cooper, Katelyn M.: 1
Dietel, Ronald: 1
Eguez, Jane: 1
Ganguli, Debalina: 1
Herman, Joan L.: 1
Huang, Austin L.: 1
Jacobsen, Jared: 1
Keiffer, Elizabeth Ann: 1
Osmundson, Ellen: 1
Publication Type
Journal Articles: 2
Dissertations/Theses -…: 1
Reports - Descriptive: 1
Reports - Evaluative: 1
Reports - Research: 1
Tests/Questionnaires: 1
Education Level
Elementary Secondary Education: 2
Higher Education: 1
Location
Singapore: 1
Laws, Policies, & Programs
No Child Left Behind Act 2001: 1
Wright, Christian D.; Huang, Austin L.; Cooper, Katelyn M.; Brownell, Sara E. – International Journal for the Scholarship of Teaching and Learning, 2018
College instructors in the United States usually make their own decisions about how to design course exams. Although summative course exams are well known to be important to student success, we know little about how instructors make decisions when designing them. To probe how instructors design exams for introductory biology, we…
Descriptors: College Faculty, Science Teachers, Science Tests, Teacher Made Tests
Keiffer, Elizabeth Ann – ProQuest LLC, 2011
A differential item functioning (DIF) simulation study was conducted to explore the type and level of impact that contamination had on Type I error and power rates in DIF analyses when the suspect item favored the same or opposite group as the DIF items in the matching subtest. Type I error and power rates were displayed separately for the…
Descriptors: Test Items, Sample Size, Simulation, Identification
Jacobsen, Jared; Ackermann, Richard; Eguez, Jane; Ganguli, Debalina; Rickard, Patricia; Taylor, Linda – Journal of Applied Testing Technology, 2011
A computer adaptive test (CAT) is a delivery methodology that serves the larger goals of the assessment system in which it is embedded. A thorough analysis of the assessment system for which a CAT is being designed is critical to ensure that the delivery platform is appropriate and addresses all relevant complexities. As such, a CAT engine must be…
Descriptors: Delivery Systems, Testing Programs, Computer Assisted Testing, Foreign Countries
Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald – Assessment and Accountability Comprehensive Center, 2010
This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…
Descriptors: Multiple Choice Tests, Test Items, Benchmarking, Educational Assessment