Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 18
Since 2006 (last 20 years): 36
Descriptor
Statistical Analysis: 51
Test Format: 51
Comparative Analysis: 25
Computer Assisted Testing: 23
Test Items: 21
Scores: 20
Foreign Countries: 12
Testing: 12
Correlation: 11
Multiple Choice Tests: 11
Item Response Theory: 9
Author
Adair, Desmond: 1
Adler, Rachel: 1
Aizawa, Kazumi: 1
Algina, James: 1
Ali, Usama S.: 1
Anakwe, Bridget: 1
Anderson, Daniel: 1
Angoff, William H.: 1
Backes, Ben: 1
Bailey, Kathleen M., Ed.: 1
Bande, Rhodora A.: 1
Audience
Practitioners: 1
Researchers: 1
Teachers: 1
Location
Australia: 2
Pennsylvania: 2
China: 1
Czech Republic: 1
Florida: 1
Iran: 1
Japan: 1
Luxembourg: 1
Maryland: 1
Massachusetts: 1
Minnesota: 1
Inga Laukaityte; Marie Wiberg – Practical Assessment, Research & Evaluation, 2024
The overall aim was to examine the effects of differences in group ability and features of the anchor test form on equating bias and the standard error of equating (SEE) using both real and simulated data. Chained kernel equating, poststratification kernel equating, and circle-arc equating were studied. A college admissions test with four different…
Descriptors: Ability Grouping, Test Items, College Entrance Examinations, High Stakes Tests
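The abstract is truncated, but the two quantities it names, equating bias and the standard error of equating (SEE), are commonly estimated by repeating an equating over many simulated samples. A minimal sketch, assuming a simple mean-sigma linear equating and hypothetical score distributions (not the chained-kernel or circle-arc methods the study compares):

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_sigma_equate(x_scores, y_scores, x):
    """Linear (mean-sigma) equating of score x from form X onto form Y's scale."""
    slope = y_scores.std() / x_scores.std()
    return slope * (x - x_scores.mean()) + y_scores.mean()

# Hypothetical criterion: with these population parameters, the X mean (18)
# should equate to the Y mean (20).
TRUE_EQUATED = 20.0

n_reps, n_examinees = 500, 200
equated = np.empty(n_reps)
for r in range(n_reps):
    # Simulated form scores; the mean offset mimics a group-ability difference.
    x_scores = rng.normal(18, 4, n_examinees)
    y_scores = rng.normal(20, 4, n_examinees)
    equated[r] = mean_sigma_equate(x_scores, y_scores, x=18.0)

bias = equated.mean() - TRUE_EQUATED  # systematic error of the equating
see = equated.std(ddof=1)             # standard error of equating (SEE)
print(f"bias={bias:.3f}  SEE={see:.3f}")
```

The SEE here is just the sampling variability of the equated score across replications; bias is its average distance from the known criterion.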
Matošková, Jana; Kovárík, Martin – Journal of Psychoeducational Assessment, 2017
It has been suggested that tacit knowledge may be a good predictor of performance in college. The purpose of this study was to investigate the extent to which a situational judgment test developed to measure tacit knowledge correlates with predictors and indicators of college performance. This situational judgment test includes eight situations…
Descriptors: Academic Achievement, Undergraduate Students, Predictor Variables, Situational Tests
Öz, Hüseyin; Özturan, Tuba – Journal of Language and Linguistic Studies, 2018
This article reports the findings of a study that sought to investigate whether computer-based vs. paper-based test-delivery mode has an impact on the reliability and validity of an achievement test for a pedagogical content knowledge course in an English teacher education program. A total of 97 university students enrolled in the English as a…
Descriptors: Computer Assisted Testing, Testing, Test Format, Teaching Methods
Wang, Lu; Steedle, Jeffrey – ACT, Inc., 2020
In recent ACT mode comparability studies, students testing on laptop or desktop computers earned slightly higher scores on average than students who tested on paper, especially on the ACT® reading and English tests (Li et al., 2017). Equating procedures adjust for such "mode effects" to make ACT scores comparable regardless of testing…
Descriptors: Test Format, Reading Tests, Language Tests, English
Muniroglu, S.; Subak, E. – Journal of Education and Training Studies, 2018
Football referees perform many actions such as jogging, running, sprinting, side steps, and backward steps during a football match. Further, football referees change match activities every 5-6 seconds. Many tests are conducted to determine the physical levels and competences of football referees, such as 50 m running, 200 m running, 12 minutes…
Descriptors: Judges, Physical Education, Team Sports, Athletics
Ali, Usama S.; Chang, Hua-Hua – ETS Research Report Series, 2014
Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may also offer similar advantages, and verification of such a hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…
Descriptors: Adaptive Testing, Simulation, Pretests Posttests, Test Items
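The suitability index (SI) is specific to this report, but the general mechanism adaptive testing builds on, selecting the item that is most informative at the current ability estimate, can be sketched under a Rasch model with a hypothetical item bank:

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch probability of a correct response at ability theta, difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: p * (1 - p)."""
    p = rasch_p(theta, b)
    return p * (1 - p)

# Hypothetical item bank (difficulties) and current ability estimate.
bank = np.array([-2.0, -1.0, 0.0, 0.5, 1.5])
theta_hat = 0.4

info = item_information(theta_hat, bank)
next_item = int(np.argmax(info))  # most informative item at theta_hat
print(next_item)  # → 3 (difficulty 0.5, closest to theta_hat)
```

Rasch information peaks where difficulty matches ability, so the selected item is the one whose difficulty is nearest the current estimate; an operational system would update theta after each response and reselect.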
Hubbard, Joanna K.; Potts, Macy A.; Couch, Brian A. – CBE - Life Sciences Education, 2017
Assessments represent an important component of undergraduate courses because they affect how students interact with course content and gauge student achievement of course objectives. To make decisions on assessment design, instructors must understand the affordances and limitations of available question formats. Here, we use a crossover…
Descriptors: Test Format, Questioning Techniques, Undergraduate Students, Objective Tests
Taherbhai, Husein; Seo, Daeryong; Bowman, Trinell – British Educational Research Journal, 2012
Literature in the United States provides many examples of no difference in student achievement when measured against the mode of test administration, i.e., paper-pencil and online versions of the test. However, most of this research centres on "regular" students who do not require differential teaching methods or different evaluation…
Descriptors: Learning Disabilities, Statistical Analysis, Teaching Methods, Test Format
Backes, Ben; Cowan, James – National Center for Analysis of Longitudinal Data in Education Research (CALDER), 2018
Nearly two dozen states now administer online exams. These tests have real consequences: their results feed into accountability systems, which have been used for more than a decade to hold schools and districts accountable for their students' learning. We examine the rollout of computer-based testing in Massachusetts over 2 years to investigate…
Descriptors: Computer Assisted Testing, Academic Achievement, Standardized Tests, Achievement Tests
Bendulo, Hermabeth O.; Tibus, Erlinda D.; Bande, Rhodora A.; Oyzon, Voltaire Q.; Milla, Norberto E.; Macalinao, Myrna L. – International Journal of Evaluation and Research in Education, 2017
Testing or evaluation in an educational context is primarily used to measure and authenticate the academic readiness, learning advancement, acquisition of skills, or instructional needs of learners. This study tried to determine whether the varied combinations of arrangements of options and letter cases in a Multiple-Choice Test (MCT)…
Descriptors: Test Format, Multiple Choice Tests, Test Construction, Eye Movements
Effects of Repeated Testing on Short- and Long-Term Memory Performance across Different Test Formats
Stenlund, Tova; Sundström, Anna; Jonsson, Bert – Educational Psychology, 2016
This study examined whether practice testing with short-answer (SA) items benefits learning over time compared to practice testing with multiple-choice (MC) items, and rereading the material. More specifically, the aim was to test the hypotheses of "retrieval effort" and "transfer appropriate processing" by comparing retention…
Descriptors: Short Term Memory, Long Term Memory, Test Format, Testing
Davison, Christopher B.; Dustova, Gandzhina – Journal of Instructional Pedagogies, 2017
This research study describes the correlations between student performance and examination format at a higher education teaching and research institution. The researchers employed a quantitative, correlational methodology utilizing linear regression analysis. The data were obtained from undergraduate student test scores over a three-year time span.…
Descriptors: Statistical Analysis, Performance Based Assessment, Correlation, Higher Education
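The study's data are not reproduced here, but a correlational analysis of examination format versus score via linear regression can be sketched on simulated data; the format coding, effect size, and score scale below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 0 = multiple-choice exam, 1 = constructed-response exam,
# with a small simulated format effect on scores.
fmt = rng.integers(0, 2, size=300)
scores = 75 + 3.0 * fmt + rng.normal(0, 8, size=300)

# Simple linear regression of score on format: score = b0 + b1 * fmt.
X = np.column_stack([np.ones_like(fmt, dtype=float), fmt.astype(float)])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)

# Pearson correlation between format and score.
r = np.corrcoef(fmt, scores)[0, 1]
print(f"intercept={beta[0]:.2f}  format effect={beta[1]:.2f}  r={r:.3f}")
```

With a binary predictor, the regression slope is simply the mean score difference between the two formats, which is why regression and correlation answer the same question here.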
Khoshsima, Hooshang; Hosseini, Monirosadat; Toroujeni, Seyyed Morteza Hashemi – English Language Teaching, 2017
The advent of technology has caused growing interest in using computers to convert conventional paper-and-pencil-based testing (henceforth PPT) into computer-based testing (henceforth CBT) in the field of education during the last decades. This constant promulgation of computers to reshape the conventional tests into computerized format permeated the…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Correlation
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation
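One common way to probe CBT/PPT equivalence, not necessarily the analysis this study used, is a paired comparison of the same students' scores across the two modes. A minimal sketch on simulated data, where the student count, score scale, and absence of a mode effect are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired scores: the same simulated students take a computer-based
# test (CBT) and an equivalent paper-and-pencil test (PPT).
n = 120
ability = rng.normal(0, 1, n)
cbt = 50 + 10 * ability + rng.normal(0, 3, n)
ppt = 50 + 10 * ability + rng.normal(0, 3, n)  # no built-in mode effect

d = cbt - ppt
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))  # paired t statistic
print(f"mean difference={d.mean():.2f}  t={t_stat:.2f}")
```

Pairing removes the shared ability variance, so the t statistic tests only the mode difference; a t near zero is consistent with the modes measuring the same construct at the same level.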
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format