Showing 466 to 480 of 7,091 results
Peer reviewed
Valizadeh, Mohammadreza – Turkish Online Journal of Distance Education, 2022
This study aimed to highlight Turkish higher education learners' perceptions of cheating in online learning programs: the ways cheating occurs, its causes, and some suggestions for minimizing it. Both quantitative and qualitative data were gathered from 163 online learners via a questionnaire including both open-ended and closed-ended questions. Data…
Descriptors: Foreign Countries, Online Courses, Cheating, Student Attitudes
Peer reviewed
Barry, Carol L.; Jones, Andrew T.; Ibáñez, Beatriz; Grambau, Marni; Buyske, Jo – Educational Measurement: Issues and Practice, 2022
In response to the COVID-19 pandemic, the American Board of Surgery (ABS) shifted from in-person to remote administrations of the oral certifying exam (CE). Although the overall exam architecture remains the same, there are a number of differences in administration and staffing costs, exam content, security concerns, and the tools used to give the…
Descriptors: COVID-19, Pandemics, Computer Assisted Testing, Verbal Tests
Hample, Rachel – ProQuest LLC, 2022
Many institutions use placement tests as a method to assess students' readiness for college-level coursework. With the increased use of technology in testing, many institutions have transitioned placement test administration to an online format in an unproctored setting. While unproctored placement tests may provide financial and logistical…
Descriptors: Supervision, Mathematics Tests, Placement Tests, Computer Assisted Testing
Peer reviewed
Moravec, Lukáš; Jecmínek, Jakub; Kukalová, Gabriela – Journal on Efficiency and Responsibility in Education and Science, 2022
The COVID-19 pandemic outbreak has upended the educational system worldwide, possibly with severe long-term consequences as most training institutions were forced to move to an online environment. Given the sudden transition to remote education, the main objective of this contribution is to evaluate the impact of distance education on examination…
Descriptors: Tests, Scores, Foreign Countries, College Students
Peer reviewed
Duncan, Alex; Joyner, David – Journal of Computer Assisted Learning, 2022
Background: It is important for institutions of higher education to maintain academic integrity, both for students and the institutions themselves. Proctoring is one way of accomplishing this, and with the increasing popularity of online courses--along with the sudden shift to online education sparked by the COVID-19 pandemic--digital proctoring…
Descriptors: Computer Assisted Testing, Supervision, Integrity, COVID-19
Peer reviewed
Cheng, Ying; Shao, Can – Educational and Psychological Measurement, 2022
Computer-based and web-based testing have become increasingly popular in recent years. Their popularity has dramatically expanded the availability of response time data. Compared to the conventional item response data that are often dichotomous or polytomous, response time has the advantage of being continuous and can be collected in an…
Descriptors: Reaction Time, Test Wiseness, Computer Assisted Testing, Simulation
Peer reviewed
Spino, LeAnne L.; Echevarría, Megan M.; Wu, Yu – Foreign Language Annals, 2022
The ACTFL Oral Proficiency Interview--computer (OPIc) employs a self-assessment instrument to determine the nature of the speaking prompts to which the test taker will respond and, thus, the difficulty of the test. Grounded in research demonstrating varying levels of accuracy in self-assessment among language learners, this study examines the…
Descriptors: Computer Assisted Testing, Oral Language, Language Proficiency, Self Evaluation (Individuals)
Peer reviewed
Parte, Laura; Mellado, Lucía – Online Learning, 2022
This study sheds light on the relation between assessment modalities, student behavior as reflected in linguistic styles, and academic performance. First, we examine the effect of assessment modalities (self-evaluation quizzes and summative quizzes) on academic performance. Using two modalities of online quizzes, we mainly focus on the student…
Descriptors: Academic Achievement, Distance Education, Tests, Student Attitudes
Peer reviewed
Emerson, David J.; Smith, Kenneth J. – Accounting Education, 2022
The recent pandemic necessitated a migration to online instruction, leading to concerns about the integrity of online assessments given the presence of fee-based websites that disseminate answers to students. We validated this concern by evaluating student performance on an online quiz where some of the questions had easily searchable…
Descriptors: Homework, Web Sites, Educational Technology, Cheating
Peer reviewed
Erdem-Kara, Basak; Dogan, Nuri – International Journal of Assessment Tools in Education, 2022
Recently, adaptive test approaches have become a viable alternative to traditional fixed-item tests. The main advantage of adaptive tests is that they reach desired measurement precision with fewer items. However, fewer items mean that each item has a more significant effect on ability estimation and therefore those tests are open to more…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Test Construction
Peer reviewed
Sahin Kursad, Merve; Cokluk Bokeoglu, Omay; Cikrikci, Rahime Nukhet – International Journal of Assessment Tools in Education, 2022
Item parameter drift (IPD) is the systematic differentiation of parameter values of items over time due to various reasons. If it occurs in computer adaptive tests (CAT), it causes errors in the estimation of item and ability parameters. Identification of the underlying conditions of this situation in CAT is important for estimating item and…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Error of Measurement
A. Corinne Huggins-Manley; Brandon M. Booth; Sidney K. D'Mello – Grantee Submission, 2022
The field of educational measurement places validity and fairness as central concepts of assessment quality (AERA, APA, NCME, 2014). Prior research has proposed embedding fairness arguments within argument-based validity processes, particularly when fairness is conceived as comparability in assessment properties across groups (Chapelle, 2021; Xi,…
Descriptors: Educational Assessment, Persuasive Discourse, Validity, Artificial Intelligence
Ben Seipel; Patrick C. Kennedy; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison – Grantee Submission, 2022
As access to higher education increases, it is important to monitor students with special needs to facilitate the provision of appropriate resources and support. Although metrics such as ACT's (formerly American College Testing) "reading readiness" provide insight into how many students may need such resources, they do not specify…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Reading Tests, Reading Comprehension
Peer reviewed
A. Corinne Huggins-Manley; Brandon M. Booth; Sidney K. D'Mello – Journal of Educational Measurement, 2022
The field of educational measurement places validity and fairness as central concepts of assessment quality. Prior research has proposed embedding fairness arguments within argument-based validity processes, particularly when fairness is conceived as comparability in assessment properties across groups. However, we argue that a more flexible…
Descriptors: Educational Assessment, Persuasive Discourse, Validity, Artificial Intelligence
Peer reviewed
Carol Eckerly; Yue Jia; Paul Jewsbury – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring