Showing all 15 results
Peer reviewed
Direct link
Esteban Guevara Hidalgo – International Journal for Educational Integrity, 2025
The COVID-19 pandemic had a profound impact on education, forcing many teachers and students who were not used to online education to adapt to an unanticipated reality by improvising new teaching and learning methods. Within the realm of virtual education, the evaluation methods underwent a transformation, with some assessments shifting towards…
Descriptors: Foreign Countries, Higher Education, COVID-19, Pandemics
Peer reviewed
Direct link
Blaženka Divjak; Petra Žugec; Katarina Pažur Anicic – International Journal of Mathematical Education in Science and Technology, 2024
Assessment is among the inevitable components of a curriculum and directs students' learning. E-assessment, as prepared and administered with the use of ICT, provides opportunities to make the process easier in some aspects, but also brings certain challenges. This paper presents an e-assessment framework from a student perspective. Our study…
Descriptors: Student Evaluation, Computer Assisted Testing, Evaluation Methods, Student Attitudes
Patrick C. Kyllonen; Amit Sevak; Teresa Ober; Ikkyu Choi; Jesse Sparks; Daniel Fishtein – ETS Research Institute, 2024
Assessment refers to a broad array of approaches for measuring or evaluating a person's (or group of persons') skills, behaviors, dispositions, or other attributes. Assessments range from standardized tests used in admissions, employee selection, licensure examinations, and domestic and international large-scale assessments of cognitive and…
Descriptors: Performance Based Assessment, Evaluation Criteria, Evaluation Methods, Test Bias
Peer reviewed
Direct link
Dadey, Nathan; Lyons, Susan; DePascale, Charles – Applied Measurement in Education, 2018
Evidence of comparability is generally needed whenever there are variations in the conditions of an assessment administration, including variations introduced by the administration of an assessment on multiple digital devices (e.g., tablet, laptop, desktop). This article is meant to provide a comprehensive examination of issues relevant to the…
Descriptors: Evaluation Methods, Computer Assisted Testing, Educational Technology, Technology Uses in Education
Peer reviewed
PDF on ERIC
Colwell, Nicole Makas – Journal of Education and Training Studies, 2013
This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…
Descriptors: Test Anxiety, Computer Assisted Testing, Evaluation Methods, Standardized Tests
Stone, Elizabeth; Davey, Tim – Educational Testing Service, 2011
There has been an increased interest in developing computer-adaptive testing (CAT) and multistage assessments for K-12 accountability assessments. The move to adaptive testing has been met with some resistance by those in the field of special education who express concern about routing of students with divergent profiles (e.g., some students with…
Descriptors: Disabilities, Adaptive Testing, Accountability, Computer Assisted Testing
Peer reviewed
Direct link
Garb, Howard N. – Psychological Assessment, 2007
To evaluate the value of computer-administered interviews and rating scales, the following topics are reviewed in the present article: (a) strengths and weaknesses of structured and unstructured assessment instruments, (b) advantages and disadvantages of computer administration, and (c) the validity and utility of computer-administered interviews…
Descriptors: Computer Assisted Testing, Rating Scales, Interviews, Evaluation Methods
Puhan, Gautam; Boughton, Keith; Kim, Sooyeon – Journal of Technology, Learning, and Assessment, 2007
The study evaluated the comparability of two versions of a certification test: a paper-and-pencil test (PPT) and computer-based test (CBT). An effect size measure known as Cohen's d and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that the effect…
Descriptors: Computer Assisted Testing, Effect Size, Test Bias, Mathematics Tests
Shermis, Mark D.; DiVesta, Francis J. – Rowman & Littlefield Publishers, Inc., 2011
"Classroom Assessment in Action" clarifies the multi-faceted roles of measurement and assessment and their applications in a classroom setting. Comprehensive in scope, Shermis and DiVesta explain basic measurement concepts and show students how to interpret the results of standardized tests. From these basic concepts, the authors then…
Descriptors: Student Evaluation, Standardized Tests, Scores, Measurement
Peer reviewed
Direct link
Marks, Anthony M.; Cronje, Johannes C. – Educational Technology & Society, 2008
Computer-based assessments are becoming more commonplace, perhaps as a necessity for faculty to cope with large class sizes. These tests often occur in large computer testing venues in which test security may be compromised. In an attempt to limit the likelihood of cheating in such venues, randomised presentation of items is automatically…
Descriptors: Educational Assessment, Educational Testing, Research Needs, Test Items
Peer reviewed
Direct link
Lei, Pui-Wa; Chen, Shu-Ying; Yu, Lan – Journal of Educational Measurement, 2006
Mantel-Haenszel and SIBTEST, which have known difficulty in detecting non-unidirectional differential item functioning (DIF), have been adapted with some success for computerized adaptive testing (CAT). This study adapts logistic regression (LR) and the item-response-theory-likelihood-ratio test (IRT-LRT), capable of detecting both unidirectional…
Descriptors: Evaluation Methods, Test Bias, Computer Assisted Testing, Multiple Regression Analysis
Secolsky, Charles, Ed.; Denison, D. Brian, Ed. – Routledge, Taylor & Francis Group, 2011
Increased demands for colleges and universities to engage in outcomes assessment for accountability purposes have accelerated the need to bridge the gap between higher education practice and the fields of measurement, assessment, and evaluation. The "Handbook on Measurement, Assessment, and Evaluation in Higher Education" provides higher…
Descriptors: Generalizability Theory, Higher Education, Institutional Advancement, Teacher Effectiveness
Peer reviewed
Direct link
Stricker, Lawrence J.; Wilder, Gita Z.; Bridgeman, Brent – International Journal of Testing, 2006
The aim of this study was to assess test takers' attitudes and beliefs about an admissions test used extensively in graduate schools of business in the United States, the Graduate Management Admission Test (GMAT), and the relationships of these attitudes and beliefs to test performance. A set of attitude and belief items was administered by…
Descriptors: Computer Assisted Testing, Test Wiseness, Gender Differences, Ethnic Groups
Peer reviewed
Fulcher, Glenn – ELT Journal, 1999
Considers the computerization of an English-language placement test for delivery on the World Wide Web. Describes a pilot study to investigate potential bias against students who lack computer familiarity or have negative attitudes towards technology, and assesses the usefulness of the test as a placement instrument by comparing the accuracy of…
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Literacy, English (Second Language)
Rizavi, Saba; Way, Walter D.; Lu, Ying; Pitoniak, Mary; Steffen, Manfred – Online Submission, 2004
The purpose of this study was to use realistically simulated data to evaluate various CAT designs for use with the verbal reasoning measure of the Medical College Admissions Test (MCAT). Factors such as item pool depth, content constraints, and item formats often cause repeated adaptive administrations of an item at ability levels that are not…
Descriptors: Test Items, Test Bias, Item Banks, College Admission