Showing 1,951 to 1,965 of 9,530 results
Peer reviewed
Direct link
Lehane, Paula; Scully, Darina; O'Leary, Michael – Irish Educational Studies, 2022
In line with the widespread proliferation of digital technology in everyday life, many countries are now beginning to use computer-based exams (CBEs) in their post-primary education systems. To ensure that these CBEs are delivered in a manner that preserves their fairness, validity, utility and credibility, several factors pertaining to their…
Descriptors: Computer Assisted Testing, Secondary School Students, Culture Fair Tests, Test Validity
Peer reviewed
Direct link
Doyle, Elaine; Buckley, Patrick – Interactive Learning Environments, 2022
While research and practice centred on students and academics working together to co-create in the higher education sector have increased, co-creation in assessment remains relatively rare in a higher education context. It is acknowledged in the literature that deeper comprehension of content can be realised when students author their own questions…
Descriptors: Multiple Choice Tests, Student Participation, Test Construction, Academic Achievement
Peer reviewed
PDF on ERIC
Coniam, David; Lee, Tony; Milanovic, Michael; Pike, Nigel; Zhao, Wen – Language Education & Assessment, 2022
The calibration of test materials generally involves the interaction between empirical analysis and expert judgement. This paper explores the extent to which scale familiarity might affect expert judgement as a component of test validation in the calibration process. It forms part of a larger study that investigates the alignment of the…
Descriptors: Specialists, Language Tests, Test Validity, College Faculty
Peer reviewed
Direct link
Masrai, Ahmed – SAGE Open, 2022
Vocabulary size measures serve important functions, not only with respect to placing learners at appropriate levels on language courses but also with a view to examining the progress of learners. One of the widely reported formats suitable for these purposes is the Yes/No vocabulary test. The primary aim of this study was to introduce and provide…
Descriptors: Vocabulary Development, Language Tests, English (Second Language), Second Language Learning
Peer reviewed
Direct link
Yunjiu, Luo; Wei, Wei; Zheng, Ying – SAGE Open, 2022
Artificial intelligence (AI) technologies have the potential to reduce the workload for second language (L2) teachers and test developers. We propose two AI distractor-generating methods for creating Chinese vocabulary items: semantic similarity and visual similarity. Semantic similarity refers to antonyms and synonyms, while visual similarity…
Descriptors: Chinese, Vocabulary Development, Artificial Intelligence, Undergraduate Students
Peer reviewed
Direct link
Kim, Ahyoung Alicia; Tywoniw, Rurik L.; Chapman, Mark – Language Assessment Quarterly, 2022
Technology-enhanced items (TEIs) are innovative, computer-delivered test items that allow test takers to better interact with the test environment compared to traditional multiple-choice items (MCIs). The interactive nature of TEIs offers improved construct coverage compared with MCIs, but little research exists regarding students' performance on…
Descriptors: Language Tests, Test Items, Computer Assisted Testing, English (Second Language)
Peer reviewed
PDF on ERIC
Whyte, Shona; Edmonds, Amanda; Palasis, Katerina; Gerbier, Emilie – Research-publishing.net, 2022
Language researchers and teachers have long been interested in the timing of learning, and the distributed practice effect, whereby greater inter-session intervals result in longer retention, is well-known (Kim & Webb, 2022). Many L2 studies have focused on the intentional learning of lexis (Edmonds, Gerbier, Palasis, & Whyte, 2021),…
Descriptors: Second Language Learning, Second Language Instruction, Teaching Methods, Learning Processes
Nguyen, Tutrang; Malone, Lizabeth; Atkins-Burnett, Sally; Larson, Addison; Cannon, Judy – Administration for Children & Families, 2022
The Head Start Family and Child Experiences Survey (FACES) and the American Indian and Alaska Native Head Start Family and Child Experiences Survey (AIAN FACES) are separate studies conducted successively over time. One goal of these studies is to provide a national picture of children's readiness for school. In this research brief, the authors use…
Descriptors: Cognitive Measurement, Cognitive Ability, School Readiness, Low Income Students
Peer reviewed
Direct link
O'Grady, Stefan – Innovation in Language Learning and Teaching, 2023
Purpose: The current study applies an innovative approach to the assessment of second language listening comprehension skills. This is an important focus in need of innovation because scores generated through language assessment tasks should reflect variation in the target skill, and the literature broadly suggests that conventional methods of…
Descriptors: Listening Comprehension, Second Language Learning, Correlation, English (Second Language)
Susan Rowe – ProQuest LLC, 2023
This dissertation explored whether unnecessary linguistic complexity (LC) in mathematics and biology assessment items changes the direction and significance of differential item functioning (DIF) between the subgroups of emergent bilinguals (EBs) and English-proficient students (EPs). Due to inconsistencies in measuring LC in items, Study One adapted a…
Descriptors: Difficulty Level, English for Academic Purposes, Second Language Learning, Second Language Instruction
Peer reviewed
PDF on ERIC
Khoshdel, Fahimeh – International Journal of Language Testing, 2017
In the current study, the validity of the C-Test is investigated using the construct identification approach. Based on this approach, the factors deemed to affect item difficulty in C-Test items were identified. To this aim, 11 factors were selected to enter into a Linear Logistic Testing Model (LLTM) analysis to…
Descriptors: Cloze Procedure, Language Tests, Test Items, Difficulty Level
Peer reviewed
PDF on ERIC
Eleje, Lydia Ijeoma; Abanobi, Chidiebere Christopher; Obasi, Emma – Asian Journal of Education and Training, 2017
An economics achievement test (EAT) for assessing senior secondary two (SS2) achievement in economics was developed and validated in the study. Five research questions guided the study. Twenty and 100 mid-senior secondary (SS2) economics students were used for the pilot testing and reliability check, respectively. A sample of 250 students randomly…
Descriptors: Secondary Schools, Achievement Tests, Pilot Projects, Test Items
Peer reviewed
Direct link
Guo, Hongwen; Robin, Frederic; Dorans, Neil – Journal of Educational Measurement, 2017
The early detection of item drift is an important issue for frequently administered testing programs because items are reused over time. Unfortunately, operational data tend to be very sparse and do not lend themselves to frequent monitoring analyses, particularly for on-demand testing. Building on existing residual analyses, the authors propose…
Descriptors: Testing, Test Items, Identification, Sample Size
Peer reviewed
Direct link
Höhne, Jan Karem; Schlosser, Stephan; Krebs, Dagmar – Field Methods, 2017
Measuring attitudes and opinions employing agree/disagree (A/D) questions is a common method in social research because it appears possible to measure different constructs with identical response scales. However, theoretical considerations suggest that A/D questions require considerable cognitive processing. Item-specific (IS) questions,…
Descriptors: Online Surveys, Test Format, Test Items, Difficulty Level
Peer reviewed
Direct link
Papenberg, Martin; Musch, Jochen – Applied Measurement in Education, 2017
In multiple-choice tests, the quality of distractors may be more important than their number. We therefore examined the joint influence of distractor quality and quantity on test functioning by providing a sample of 5,793 participants with five parallel test sets consisting of items that differed in the number and quality of distractors.…
Descriptors: Multiple Choice Tests, Test Items, Test Validity, Test Reliability