Showing 1 to 15 of 34 results
Peer reviewed
Keller, Lisa A.; Keller, Robert; Cook, Robert J.; Colvin, Kimberly F. – Applied Measurement in Education, 2016
The equating of tests is an essential process in high-stakes, large-scale testing conducted over multiple forms or administrations. By adjusting for differences in difficulty and placing scores from different administrations of a test on a common scale, equating allows scores from these different forms and administrations to be directly compared…
Descriptors: Item Response Theory, Equated Scores, Test Format, Testing Programs
Peer reviewed
Brame, Cynthia J.; Biel, Rachel – CBE - Life Sciences Education, 2015
Testing within the science classroom is commonly used for both formative and summative assessment purposes to let the student and the instructor gauge progress toward learning goals. Research within cognitive science suggests, however, that testing can also be a learning event. We present summaries of studies that suggest that repeated retrieval…
Descriptors: Undergraduate Students, Testing Programs, Feedback (Response), Test Format
Peer reviewed
Chung, Sun Joo; Haider, Iftikhar; Boyd, Ryan – Language Teaching, 2015
At the University of Illinois at Urbana-Champaign (UIUC), the English Placement Test (EPT) is the institutional placement test that is used to place students into appropriate English as a second language (ESL) writing and/or pronunciation service courses. The EPT is used to assess the English ability of newly admitted international undergraduate…
Descriptors: College English, Student Placement, English (Second Language), Foreign Students
Peer reviewed
Ghaderi, Marzieh; Mogholi, Marzieh; Soori, Afshin – International Journal of Education and Literacy Studies, 2014
Testing has many facets and connections. One important issue is how to assess or measure students or learners: what tools to use, what format to adopt, and what goals to pursue. This paper therefore examines styles of testing in schools and other educational settings. Since the purposes of educational system…
Descriptors: Testing, Testing Programs, Intermode Differences, Computer Assisted Testing
Peer reviewed
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effects in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
Peer reviewed
Wang, Lin; Qian, Jiahe; Lee, Yi-Hsuan – ETS Research Report Series, 2013
The purpose of this study was to evaluate the combined effects of reduced equating sample size and shortened anchor test length on item response theory (IRT)-based linking and equating results. Data from two independent operational forms of a large-scale testing program were used to establish the baseline results for evaluating the results from…
Descriptors: Test Construction, Item Response Theory, Testing Programs, Simulation
Peer reviewed
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
Peer reviewed
Becker, Kirk A.; Bergstrom, Betty A. – Practical Assessment, Research & Evaluation, 2013
The need for increased exam security, improved test formats, more flexible scheduling, better measurement, and more efficient administrative processes has caused testing agencies to consider converting the administration of their exams from paper-and-pencil to computer-based testing (CBT). Many decisions must be made in order to provide an optimal…
Descriptors: Testing, Models, Testing Programs, Program Administration
Peer reviewed
Wyse, Adam E. – Applied Psychological Measurement, 2011
In many practical testing situations, alternate test forms from the same testing program are not strictly parallel to each other and instead the test forms exhibit small psychometric differences. This article investigates the potential practical impact that these small psychometric differences can have on expected classification accuracy. Ten…
Descriptors: Test Format, Test Construction, Testing Programs, Psychometrics
Peer reviewed
Dimacali, Allen M. – Journal of Mathematics Education at Teachers College, 2012
In conjunction with the adoption and subsequent implementation of the "Common Core State Standards for Mathematics" (CCSSM), state-led consortia are developing next-generation assessments aligned to the CCSSM. This paper discusses the progress and plans of two main coalitions of states--the Partnership for Assessment of Readiness for…
Descriptors: Common Core State Standards, Alternative Assessment, Test Construction, Testing Programs
Peer reviewed
Filipi, Anna – Language Testing, 2012
The Assessment of Language Competence (ALC) certificate program is an annual, international testing program developed by the Australian Council for Educational Research to test the listening and reading comprehension skills of lower to middle year levels of secondary school. The tests are developed for three levels in French, German, Italian and…
Descriptors: Listening Comprehension Tests, Item Response Theory, Statistical Analysis, Foreign Countries
Peer reviewed
Higgins, Jennifer; Patterson, Margaret Becker; Bozman, Martha; Katz, Michael – Journal of Technology, Learning, and Assessment, 2010
This study examined the feasibility of administering GED Tests using a computer based testing system with embedded accessibility tools and the impact on test scores and test-taker experience when GED Tests are transitioned from paper to computer. Nineteen test centers across five states successfully installed the computer based testing program,…
Descriptors: Testing Programs, Testing, Computer Uses in Education, Mathematics Tests
Peer reviewed
DeMars, Christine E. – Educational Assessment, 2007
A series of 8 tests was administered to university students over 4 weeks for program assessment purposes. The stakes of these tests were low for students; they received course points based on test completion, not test performance. Tests were administered in a counterbalanced order across 2 administrations. Response time effort, a measure of the…
Descriptors: Reaction Time, Guessing (Tests), Testing Programs, College Students
Peer reviewed
Sykes, Robert C.; Yen, Wendy M. – Journal of Educational Measurement, 2000
Investigated how well the generalized and Rasch models described item and test performance across a broad range of mixed-item-format test configurations (six tests from two state proficiency testing programs). Evaluating the impact of model assumptions on the predictions of item and test information permitted a delineation of the implications of…
Descriptors: Achievement Tests, Elementary Secondary Education, Prediction, Scaling
Huntington, Fred – Executive Educator, 1985
Tips on improving student test scores fall into two categories: format training and environmental conditions. Environmental conditions include notifying parents of the test, emphasizing the importance of the test, and giving the test in familiar surroundings. Format training includes explaining multiple-choice options and answer sheets. (MLF)
Descriptors: Administrator Role, Elementary Secondary Education, Environmental Influences, Scores