Showing 1 to 15 of 39 results
Peer reviewed
Griffith, Stafford Alexander – Quality Assurance in Education: An International Perspective, 2017
Purpose: The purpose of this paper is to show how higher education institutions in the Caribbean may benefit from the quality assurance measures implemented by the Caribbean Examinations Council (CXC). Design/methodology/approach: The paper uses an outcomes model of quality assurance to analyse the measures implemented by the CXC to assure quality…
Descriptors: Higher Education, Quality Assurance, Testing Programs, Educational Quality
Peer reviewed
Tindal, Gerald; Nese, Joseph F. T.; Stevens, Joseph J. – Educational Assessment, 2017
For the past decade, the accountability model associated with No Child Left Behind (NCLB) emphasized proficiency on end-of-year tests; with the Every Student Succeeds Act (ESSA), the emphasis on proficiency within statewide testing programs, though now integrated with other measures of student learning, nevertheless remains a primary metric for…
Descriptors: Testing Programs, Middle School Students, Models, State Standards
Peer reviewed
Simui, Francis; Chibale, Henry; Namangala, Boniface – Open Praxis, 2017
This paper focuses on the management of distance education examinations in a poorly resourced north-eastern region of Zambia. The study applies a hermeneutic phenomenology approach to generate and make sense of the data, drawing its insights from the lived experiences of 2 invigilators and 66 students who were purposively selected. Meaning…
Descriptors: Distance Education, Phenomenology, Testing Programs, Testing
Peer reviewed
Chen, Qian – International Journal of Science and Mathematics Education, 2014
In this study, the Trends in International Mathematics and Science Study 2007 data were used to build mathematics achievement models of fourth graders in two East Asian school systems: Hong Kong and Singapore. In each school system, eight variables at student level and nine variables at school/class level were incorporated to build an achievement…
Descriptors: Foreign Countries, Mathematics Achievement, Grade 4, Mathematics Tests
Peer reviewed
Alonzo, Alicia C.; Ke, Li – Measurement: Interdisciplinary Research and Perspectives, 2016
A new vision of science learning described in the "Next Generation Science Standards"--particularly the science and engineering practices and their integration with content--poses significant challenges for large-scale assessment. This article explores what might be learned from advances in large-scale science assessment and…
Descriptors: Science Achievement, Science Tests, Group Testing, Accountability
Peer reviewed
Sabatini, John; O'Reilly, Tenaha; Deane, Paul – ETS Research Report Series, 2013
This report describes the foundation and rationale for a framework designed to measure reading literacy. The aim of the effort is to build an assessment system that reflects current theoretical conceptions of reading and is developmentally sensitive across a prekindergarten to 12th grade student range. The assessment framework is intended to…
Descriptors: Reading Tests, Literacy, Models, Testing Programs
Peer reviewed
Debeer, Dries; Buchholz, Janine; Hartig, Johannes; Janssen, Rianne – Journal of Educational and Behavioral Statistics, 2014
In this article, the change in examinee effort during an assessment, which we will refer to as persistence, is modeled as an effect of item position. A multilevel extension is proposed to analyze hierarchically structured data and decompose the individual differences in persistence. Data from the 2009 Programme for International Student Assessment…
Descriptors: Reading Tests, International Programs, Testing Programs, Individual Differences
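The Debeer et al. abstract above treats persistence as a person-specific item-position effect. As a rough illustration only (not the authors' model or data; every parameter value below is invented), the toy simulation shows how a negative position effect on the log-odds of a correct response produces the observable symptom of declining proportion-correct across a booklet:

```python
# Hedged sketch, not the authors' model code: a toy simulation of "persistence"
# as an item-position effect with person-specific slopes. All values invented.
import numpy as np

rng = np.random.default_rng(3)

n_persons, n_items = 1000, 40
theta = rng.normal(0, 1, n_persons)            # person ability
b = rng.normal(0, 0.8, n_items)                # item difficulty
# Person-specific persistence: change in log-odds per one-step increase in position.
persistence = rng.normal(-0.02, 0.01, n_persons)

position = np.arange(n_items)                  # 0 = first item administered
logit = theta[:, None] - b[None, :] + persistence[:, None] * position[None, :]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Observable symptom: proportion correct declines across the test booklet.
print("p-correct, first 5 items:", y[:, :5].mean().round(3))
print("p-correct, last 5 items: ", y[:, -5:].mean().round(3))
```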
Peer reviewed
Assouline, Susan G.; Lupkowski-Shoplik, Ann – Journal of Psychoeducational Assessment, 2012
The Talent Search model, founded at Johns Hopkins University by Dr. Julian C. Stanley, is fundamentally an above-level testing program. This simplistic description belies the enduring impact that the Talent Search model has had on the lives of hundreds of thousands of gifted students as well as their parents and teachers. In this article, we…
Descriptors: Testing Programs, Academically Gifted, Elementary Secondary Education, Talent
Peer reviewed
Conti, Maria; LaMance, Rachel; Miller-Cochran, Susan – Composition Forum, 2017
To address the needs and interests of primary stakeholders in a writing program, this article presents a model of "grassroots" assessment that involves instructors from all ranks as well as students in the development, facilitation, and interpretation of assessment results. The authors describe two assessment plans that measured student…
Descriptors: Writing Improvement, Needs Assessment, Stakeholders, Student Needs
Peer reviewed
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
Peer reviewed
Becker, Kirk A.; Bergstrom, Betty A. – Practical Assessment, Research & Evaluation, 2013
The need for increased exam security, improved test formats, more flexible scheduling, better measurement, and more efficient administrative processes has caused testing agencies to consider converting the administration of their exams from paper-and-pencil to computer-based testing (CBT). Many decisions must be made in order to provide an optimal…
Descriptors: Testing, Models, Testing Programs, Program Administration
Peer reviewed
French, Brian F.; Finch, W. Holmes – Journal of Educational Measurement, 2010
The purpose of this study was to examine the performance of differential item functioning (DIF) assessment in the presence of a multilevel structure that often underlies data from large-scale testing programs. Analyses were conducted using logistic regression (LR), a popular, flexible, and effective tool for DIF detection. Data were simulated…
Descriptors: Test Bias, Testing Programs, Evaluation, Measurement
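The French and Finch abstract above names logistic regression as the DIF-detection method and notes that data were simulated. A minimal single-level sketch of that idea (not the authors' simulation design; the sample size, DIF magnitude, and the use of a latent ability variable as the matching criterion are all assumptions made for the example) is:

```python
# Hedged sketch of logistic-regression DIF screening on simulated data.
# Not the authors' code; all design choices below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n = 2000
group = rng.integers(0, 2, n)                  # 0 = reference, 1 = focal
ability = rng.normal(0, 1, n)

# One studied item with uniform DIF against the focal group.
b_item, dif_shift = 0.0, 0.5
p = 1 / (1 + np.exp(-(ability - b_item - dif_shift * group)))
y = rng.binomial(1, p)

def fit_logistic(X, y, n_iter=50):
    """Plain Newton-Raphson logistic regression returning coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1 / (1 + np.exp(-(X @ beta)))
        W = mu * (1 - mu)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - mu))
    return beta

def loglik(X, y, beta):
    eta = X @ beta
    return np.sum(y * eta - np.log1p(np.exp(eta)))

ones = np.ones(n)
# Compact model: matching variable only (here ability stands in for the total score).
# Augmented model adds group (uniform DIF) and score-by-group interaction (non-uniform DIF).
X_compact = np.column_stack([ones, ability])
X_full = np.column_stack([ones, ability, group, ability * group])

ll0 = loglik(X_compact, y, fit_logistic(X_compact, y))
ll1 = loglik(X_full, y, fit_logistic(X_full, y))
print("LR chi-square for DIF (2 df):", round(2 * (ll1 - ll0), 2))
```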
Peer reviewed
van der Linden, Wim J. – Journal of Educational Measurement, 2010
Although response times on test items are recorded on a natural scale, the scale for some of the parameters in the lognormal response-time model (van der Linden, 2006) is not fixed. As a result, when the model is used to periodically calibrate new items in a testing program, the parameters are not automatically mapped onto a common scale. Several…
Descriptors: Test Items, Testing Programs, Measures (Individuals), Item Response Theory
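The van der Linden abstract above turns on the fact that time-intensity parameters from a new lognormal response-time calibration are identified only up to the scale fixed within that calibration, so they must be linked to the program's common scale. The sketch below is a deliberately simplified illustration of that point, using crude mean-based estimates and a mean shift on anchor items rather than any of the linking procedures the article actually evaluates; all numbers are invented:

```python
# Hedged sketch, not van der Linden's estimation code: shows why separately
# calibrated time-intensity parameters differ by an arbitrary constant and how
# a mean shift on anchor items puts them on one scale. All values invented.
import numpy as np

rng = np.random.default_rng(1)

n_persons, n_items = 500, 10
beta_true = rng.normal(4.0, 0.3, n_items)      # item time intensities (log-seconds)
tau = rng.normal(0.0, 0.2, n_persons)          # person speed
sigma = 0.3

# Lognormal RT model: log T_pi = beta_i - tau_p + eps_pi
logT = beta_true[None, :] - tau[:, None] + rng.normal(0, sigma, (n_persons, n_items))

def calibrate(logT):
    """Naive calibration that fixes identification by centring person speed
    within the sample, so beta is recovered only up to an additive constant."""
    tau_hat = -(logT.mean(axis=1) - logT.mean())
    return (logT + tau_hat[:, None]).mean(axis=0)

# "Old" calibration on one sample; "new" calibration on a faster sample,
# so the two arbitrary constants differ.
old = calibrate(logT[:250])
tau_shifted = tau[250:] + 0.4
logT_new = beta_true[None, :] - tau_shifted[:, None] + rng.normal(0, sigma, (250, n_items))
new = calibrate(logT_new)

anchors = np.arange(5)                         # items treated as the anchor set
shift = (old[anchors] - new[anchors]).mean()   # mean-mean linking constant
print("mean |old - new| before linking:", round(np.abs(old - new).mean(), 3))
print("mean |old - new| after  linking:", round(np.abs(old - (new + shift)).mean(), 3))
```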
Peer reviewed
Li, Ying; Jiao, Hong; Lissitz, Robert W. – Journal of Applied Testing Technology, 2012
This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…
Descriptors: Achievement Tests, Science Tests, Item Response Theory, Measures (Individuals)
Peer reviewed
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
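The Zhang, Breyer, and Lorenz abstract above lists human-machine agreement as one evaluation dimension. Quadratically weighted kappa is a common agreement statistic for essay scores; whether it is the statistic the report used is an assumption here, and the scores below are invented:

```python
# Hedged sketch: quadratically weighted kappa between human and machine essay
# scores on an invented 1-6 scale. Illustrative only; not from the ETS report.
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Quadratically weighted kappa between two integer score vectors."""
    scores = np.arange(min_score, max_score + 1)
    k = len(scores)
    # Observed joint distribution of the two raters' scores.
    obs = np.zeros((k, k))
    for a, b in zip(rater_a, rater_b):
        obs[a - min_score, b - min_score] += 1
    obs /= obs.sum()
    # Expected distribution under independence of the marginals.
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Quadratic disagreement weights.
    i, j = np.meshgrid(scores, scores, indexing="ij")
    w = (i - j) ** 2 / (k - 1) ** 2
    return 1 - (w * obs).sum() / (w * exp).sum()

rng = np.random.default_rng(2)
human = rng.integers(1, 7, 300)                            # human scores, 1-6
machine = np.clip(human + rng.integers(-1, 2, 300), 1, 6)  # mostly-agreeing machine scores
print("QWK:", round(quadratic_weighted_kappa(human, machine, 1, 6), 3))
```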