Showing all 13 results
Peer reviewed
PDF on ERIC (full text available)
Ferrando, Pere J. – Psicologica: International Journal of Methodology and Experimental Psychology, 2012
Model-based attempts to rigorously study the broad and imprecise concept of "discriminating power" are scarce, and generally limited to nonlinear models for binary responses. This paper proposes a comprehensive framework for assessing the discriminating power of item and test scores which are analyzed or obtained using Spearman's…
Descriptors: Student Evaluation, Psychometrics, Test Items, Scores
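The "discriminating power" this record examines can be illustrated, outside Ferrando's model-based framework, by the classical corrected item-total correlation: how strongly an item's 0/1 score correlates with the rest-of-test score. A minimal sketch with invented data:

```python
from statistics import mean

def pearson(x, y):
    # Plain Pearson correlation between two equal-length sequences.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def corrected_item_total(scores, item):
    # scores: one row of 0/1 item scores per examinee (invented data below).
    item_col = [row[item] for row in scores]
    rest = [sum(row) - row[item] for row in scores]  # total minus this item
    return pearson(item_col, rest)

scores = [  # 5 examinees x 4 items, invented
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
```

On this toy data, item 0 correlates positively with the rest of the test (it discriminates between stronger and weaker examinees), while item 3 correlates negatively, the classic signature of a flawed item.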
Peer reviewed
Direct link
Camilli, Gregory – Educational Research and Evaluation, 2013
In the attempt to identify or prevent unfair tests, both quantitative analyses and logical evaluation are often used. For the most part, fairness evaluation is a pragmatic attempt at determining whether procedural or substantive due process has been accorded to either a group of test takers or an individual. In both the individual and comparative…
Descriptors: Alternative Assessment, Test Bias, Test Content, Test Format
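One quantitative analysis commonly used to flag potentially unfair items is the Mantel-Haenszel procedure for differential item functioning (DIF). The article discusses fairness evaluation in general terms, so the following is only an illustrative sketch with invented counts:

```python
# Mantel-Haenszel common odds ratio across score strata: a standard
# quantitative screen for DIF between a reference and a focal group
# matched on total score. A value near 1 suggests little DIF.
def mh_odds_ratio(strata):
    # strata: list of (ref_correct, ref_wrong, focal_correct, focal_wrong)
    num = sum(rc * fw / (rc + rw + fc + fw) for rc, rw, fc, fw in strata)
    den = sum(rw * fc / (rc + rw + fc + fw) for rc, rw, fc, fw in strata)
    return num / den

strata = [           # invented 2x2 tables, one per ability stratum
    (30, 10, 28, 12),  # low-score stratum
    (50, 5, 47, 8),    # high-score stratum
]
```

Stratifying on total score before comparing groups is what separates this from a naive comparison of raw pass rates, which would confound group differences in ability with item unfairness.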
Peer reviewed
Direct link
Usener, Claus A.; Majchrzak, Tim A.; Kuchen, Herbert – Interactive Technology and Smart Education, 2012
Purpose: To overcome the high manual effort of assessments for teaching personnel, e-assessment systems are used to assess students using information systems (IS). The purpose of this paper is to propose an extension of EASy, a system for e-assessment of exercises that require higher-order cognitive skills. The latest module allows assessing…
Descriptors: Foreign Countries, Computer Software, Computer Software Evaluation, Computer Assisted Testing
Peer reviewed
PDF on ERIC (full text available)
Hopstock, Paul J.; Pelczar, Marisa P. – National Center for Education Statistics, 2011
This technical report and user's guide is designed to provide researchers with an overview of the design and implementation of the 2009 Program for International Student Assessment (PISA), as well as with information on how to access the PISA 2009 data. This information is meant to supplement that presented in Organization for Economic Cooperation…
Descriptors: Parent Materials, Academic Achievement, Measures (Individuals), Program Effectiveness
Peer reviewed
Direct link
Koong, Chorng-Shiuh; Wu, Chi-Ying – Computers & Education, 2010
The theory of multiple intelligences, in both hypothesis and implementation, has risen to prominence among instructional methodologies. Meanwhile, pedagogical theories and concepts need more alternative and interactive assessments to demonstrate their prevalence (Kinugasa, Yamashita, Hayashi, Tominaga, & Yamasaki, 2005). In general,…
Descriptors: Multiple Intelligences, Test Items, Grading, Programming
National Assessment Governing Board, 2008
An assessment framework is like a blueprint, laying out the basic design of the assessment by describing the mathematics content that should be tested and the types of assessment questions that should be included. It also describes how the various design factors should be balanced across the assessment. This is an assessment framework, not a…
Descriptors: Test Items, Student Evaluation, National Competency Tests, Data Analysis
Peer reviewed
Direct link
Hertenstein, Matthew J.; Wayand, Joseph F. – Journal of Instructional Psychology, 2008
Many psychology instructors present videotaped examples of behavior at least occasionally during their courses. However, few include video clips during examinations. We provide examples of video-based questions, offer guidelines for their use, and discuss their benefits and drawbacks. In addition, we provide empirical evidence to support the use…
Descriptors: Student Evaluation, Video Technology, Evaluation Methods, Test Construction
Peer reviewed
Direct link
Frey, Andreas; Hartig, Johannes; Rupp, Andre A. – Educational Measurement: Issues and Practice, 2009
In most large-scale assessments of student achievement, several broad content domains are tested. Because more items are needed to cover the content domains than can be presented in the limited testing time to each individual student, multiple test forms or booklets are utilized to distribute the items to the students. The construction of an…
Descriptors: Measures (Individuals), Test Construction, Theory Practice Relationship, Design
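The booklet-construction problem described here is often attacked with balanced incomplete block designs. As a sketch (a textbook cyclic (7, 3, 1) design, not necessarily the designs the authors analyse), seven item clusters can be spread over seven booklets of three clusters each so that every cluster appears equally often and every pair of clusters shares exactly one booklet:

```python
from collections import Counter
from itertools import combinations

def cyclic_bibd(base=(0, 1, 3), v=7):
    # Develop the base block cyclically mod v; with base {0, 1, 3} and
    # v = 7 this yields the classic (7, 3, 1) balanced incomplete
    # block design: 7 booklets, 3 clusters each.
    return [sorted((b + i) % v for b in base) for i in range(v)]

booklets = cyclic_bibd()
```

The balance properties matter for linking: every cluster is answered by the same share of students, and every pair of clusters co-occurs, so all between-cluster covariances are estimable from the data.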
Peer reviewed
Direct link
Marks, Anthony M.; Cronje, Johannes C. – Educational Technology & Society, 2008
Computer-based assessments are becoming more commonplace, perhaps as a necessity for faculty to cope with large class sizes. These tests often occur in large computer testing venues in which test security may be compromised. In an attempt to limit the likelihood of cheating in such venues, randomised presentation of items is automatically…
Descriptors: Educational Assessment, Educational Testing, Research Needs, Test Items
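Randomised presentation of the kind described is often implemented by seeding a pseudo-random shuffle with the student identifier, so each student sees a distinct but reproducible order. A sketch (identifiers and items invented):

```python
import random

def randomised_order(items, student_id):
    rng = random.Random(student_id)  # seeded: same student, same order
    order = list(items)              # copy so the item bank is untouched
    rng.shuffle(order)
    return order

items = ["Q1", "Q2", "Q3", "Q4", "Q5"]  # invented item bank
```

Seeding from the identifier is the design choice that matters here: neighbouring students get different orders, yet invigilators can reconstruct exactly what any student saw when investigating a suspected cheating incident.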
Peer reviewed
Direct link
Al-A'ali, Mansoor – Educational Technology & Society, 2007
Computerized adaptive testing studies the scoring of tests and items based on an assumed mathematical relationship between examinees' ability and their responses. Adaptive student tests, which are based on item response theory (IRT), have many advantages over conventional tests. We use the least square method, a…
Descriptors: Educational Testing, Higher Education, Elementary Secondary Education, Student Evaluation
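The least-squares idea mentioned in the abstract can be sketched against the common two-parameter logistic (2PL) IRT model: choose the ability value that minimises the squared gap between observed 0/1 responses and model probabilities. Item parameters below are invented, and the authors' actual estimator may differ:

```python
import math

def p_correct(theta, a, b):
    # 2PL item response function: P(correct | ability theta),
    # with discrimination a and difficulty b.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ls_ability(responses, items, grid=None):
    # Least-squares ability estimate by grid search over theta.
    if grid is None:
        grid = [i / 10 for i in range(-40, 41)]  # theta in [-4, 4]
    def sse(theta):
        return sum((u - p_correct(theta, a, b)) ** 2
                   for u, (a, b) in zip(responses, items))
    return min(grid, key=sse)

items = [(1.2, -1.0), (1.0, 0.0), (0.8, 1.0)]  # (a, b) pairs, invented
```

This is the core adaptive-testing loop in miniature: after each response the ability estimate is updated, and the next item can be picked to be most informative near the current estimate.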
Peer reviewed
Direct link
Guzman, Eduardo; Conejo, Ricardo; Garcia-Hervas, Emilio – Educational Technology & Society, 2005
SIETTE is a web-based adaptive testing system. It implements Computerized Adaptive Tests: tailor-made, theory-based tests in which the questions shown to students, the finalization of the test, and the estimation of student knowledge are all accomplished adaptively. To construct these tests, SIETTE has an authoring environment comprising a suite of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Construction, Test Items
Webb, Noreen; Herman, Joan; Webb, Norman – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2006
In this report we explore the role of reviewer agreement in judgments about alignment between tests and standards. Specifically, we consider approaches to describing alignment that incorporate reviewer agreement information in different ways. The essential questions were whether and how taking into account reviewer agreement changes the picture of…
Descriptors: Academic Standards, Achievement Tests, Standardized Tests, Student Evaluation
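Reviewer agreement of the kind weighed in this report is often summarised with Cohen's kappa, i.e. agreement corrected for chance. An illustrative sketch only; the report's own agreement measures may differ, and the ratings below are invented:

```python
def cohens_kappa(r1, r2):
    # Agreement between two reviewers' category ratings, beyond chance:
    # (observed agreement - expected-by-chance) / (1 - expected).
    n = len(r1)
    categories = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

reviewer1 = ["aligned", "aligned", "not", "not"]  # invented ratings
reviewer2 = ["aligned", "not", "not", "not"]
```

The chance correction is why kappa can change the picture of alignment: two reviewers who rate most items "aligned" will show high raw agreement even when rating at random, and kappa discounts exactly that.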
Dutcher, Peggy – 1990
Authentic reading assessment is examined, focusing on its implementation within the Michigan Essential Skills Reading Test (MESRT). Authentic reading assessment emerged as a response to research that indicates that reading is not a particular skill but an interaction among reader, text, and the context of the reading situation. Unlike formal…
Descriptors: Alternative Assessment, Elementary Secondary Education, Evaluation Research, Multiple Choice Tests