Showing 1 to 15 of 17 results
Peer reviewed
Mahdi Ghorbankhani; Keyvan Salehi – SAGE Open, 2025
Academic procrastination, the tendency to delay academic tasks without reasonable justification, has significant implications for students' academic performance and overall well-being. To measure this construct, numerous scales have been developed, among which the Academic Procrastination Scale (APS) has shown promise in assessing academic…
Descriptors: Psychometrics, Measures (Individuals), Time Management, Foreign Countries
Peer reviewed
Farshad Effatpanah; Purya Baghaei; Mona Tabatabaee-Yazdi; Esmat Babaii – Language Testing, 2025
This study aimed to propose a new method for scoring C-Tests as measures of general language proficiency. In this approach, the unit of analysis is sentences rather than gaps or passages. That is, the gaps correctly reformulated in each sentence were aggregated into a sentence score, and then each sentence was entered into the analysis as a polytomous…
Descriptors: Item Response Theory, Language Tests, Test Items, Test Construction
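The abstract above describes changing the unit of analysis from individual gaps to sentences by summing the correctly restored gaps within each sentence. A minimal sketch of that aggregation step, assuming a hypothetical 0/1 gap-score matrix and a hypothetical gap-to-sentence mapping (neither taken from the study itself):

```python
import numpy as np

# Hypothetical data: 0/1 scores for 20 C-Test gaps from 5 examinees.
rng = np.random.default_rng(0)
gap_scores = rng.integers(0, 2, size=(5, 20))

# Hypothetical mapping of each gap to the sentence it belongs to
# (gaps 0-3 sit in sentence 0, gaps 4-7 in sentence 1, and so on).
gap_to_sentence = np.repeat(np.arange(5), 4)

def sentence_scores(gaps, mapping):
    """Sum the correctly restored gaps within each sentence, turning
    every sentence into one polytomous item (0 .. gaps per sentence)."""
    sentences = np.unique(mapping)
    return np.column_stack([gaps[:, mapping == s].sum(axis=1) for s in sentences])

poly = sentence_scores(gap_scores, gap_to_sentence)
print(poly.shape)  # (5 examinees, 5 sentence-level polytomous items)
```

Each column of the resulting matrix could then be treated as a polytomous item in an IRT analysis, which is the shift in scoring unit the record describes.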
Peer reviewed
Mehri Izadi; Maliheh Izadi; Farrokhlagha Heidari – Education and Information Technologies, 2024
In today's environment of growing class sizes due to the prevalence of online and e-learning systems, providing one-to-one instruction and feedback has become a challenging task for teachers. Nevertheless, the dialectical integration of instruction and assessment into a seamless and dynamic activity can provide a continuous flow of assessment…
Descriptors: Adaptive Testing, Computer Assisted Testing, English (Second Language), Second Language Learning
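This record concerns computer-adaptive, assessment-integrated instruction. One common ingredient of adaptive testing, not necessarily the procedure used in this study, is selecting the next item by maximum Fisher information at the examinee's current ability estimate. A sketch under that assumption, with a hypothetical 2PL item bank:

```python
import numpy as np

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Hypothetical item bank: discrimination (a) and difficulty (b) parameters.
bank_a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
bank_b = np.array([-1.0, 0.0, 0.5, 1.2, -0.3])
administered = {0, 3}          # items already given
theta_hat = 0.4                # current provisional ability estimate

info = item_information(theta_hat, bank_a, bank_b)
info[list(administered)] = -np.inf   # exclude items already administered
next_item = int(np.argmax(info))     # maximum-information selection
print(next_item)
```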
Peer reviewed
Afsharrad, Mohammad; Pishghadam, Reza; Baghaei, Purya – International Journal of Language Testing, 2023
Testing organizations are faced with increasing demand to provide subscores in addition to the total test score. However, psychometricians argue that most subscores do not have added value to be worth reporting. To have added value, subscores need to meet a number of criteria: they should be reliable, distinctive, and distinct from each other and…
Descriptors: Comparative Analysis, Scores, Value Added Models, Psychometrics
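The criteria named in this abstract, reliability and distinctness of subscores, are often screened with subscale reliabilities and disattenuated correlations between subscores. The sketch below shows that rough screen on simulated data; it is an illustration of the criteria, not the value-added analysis the authors actually ran, and all data are hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
# Hypothetical test: two 10-item subscales for 200 examinees,
# both driven by the same underlying ability.
ability = rng.normal(size=(200, 1))
sub1 = (rng.normal(size=(200, 10)) + ability > 0).astype(int)
sub2 = (rng.normal(size=(200, 10)) + ability > 0).astype(int)

rel1, rel2 = cronbach_alpha(sub1), cronbach_alpha(sub2)
r_obs = np.corrcoef(sub1.sum(axis=1), sub2.sum(axis=1))[0, 1]
r_true = r_obs / np.sqrt(rel1 * rel2)   # disattenuated correlation

# Rough screen: subscores are worth reporting only if each is reliable
# and the disattenuated correlation stays well below 1 (i.e., the
# subscores are distinct rather than interchangeable with each other).
print(round(rel1, 2), round(rel2, 2), round(r_true, 2))
```

Because both simulated subscales load on the same ability, the disattenuated correlation comes out near 1, illustrating a case where subscores would add little value.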
Peer reviewed
Jamalzadeh, Mehri; Lotfi, Ahmad Reza; Rostami, Masoud – Language Testing in Asia, 2021
The current study sought to examine the validity of a General English Achievement Test (GEAT), administered to university students in the fall semester of the 2018-2019 academic year, by hybridizing differential item functioning (DIF) and differential distractor functioning (DDF) analytical models. Using a purposive sampling method, from the target population…
Descriptors: Language Tests, Achievement Tests, Undergraduate Students, Islam
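The record combines DIF and DDF analyses. As a point of reference only, and not the authors' specific models, a uniform-DIF check is often run as a logistic regression likelihood-ratio test with the matching variable and group membership; DDF extends the same idea to distractor choices (e.g., with a multinomial model). A sketch on hypothetical simulated data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
group = rng.integers(0, 2, size=n)     # 0 = reference, 1 = focal group
ability = rng.normal(size=n)           # matching variable (e.g., total score)
# Hypothetical item responses with a uniform DIF effect against the focal group.
logit = 1.2 * ability - 0.8 * group
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Compare a model with ability only against one that adds group membership.
X0 = sm.add_constant(ability)
X1 = sm.add_constant(np.column_stack([ability, group]))
m0 = sm.Logit(y, X0).fit(disp=False)
m1 = sm.Logit(y, X1).fit(disp=False)
lr_stat = 2 * (m1.llf - m0.llf)        # likelihood-ratio statistic, 1 df
print(round(lr_stat, 2))
```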
Peer reviewed
Amirsheibani, Morteza; Ghazanfari, Mohammad; Pishghadam, Reza – MEXTESOL Journal, 2020
Grice's conversational maxims remain among the most influential pragmatic theories to date. The primary purpose of this study was to measure Iranian intermediate EFL learners' comprehension of English humor based on Grice's non-observed conversational maxims. Moreover, this study intended to find which of Grice's non-observed…
Descriptors: Humor, Linguistic Theory, Scores, Pragmatics
Peer reviewed
Lee, HyeSun – Applied Measurement in Education, 2018
The current simulation study examined the effects of Item Parameter Drift (IPD) occurring in a short scale on parameter estimates in multilevel models where scores from a scale were employed as a time-varying predictor to account for outcome scores. Five factors, including three decisions about IPD, were considered for simulation conditions. It…
Descriptors: Test Items, Hierarchical Linear Modeling, Predictor Variables, Scores
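The five simulation factors are not spelled out in this snippet, so the sketch below only illustrates the core idea of item parameter drift with hypothetical values: one item's difficulty shifts across measurement waves, and the resulting wave-specific scale scores are what would enter a multilevel growth model as a time-varying predictor:

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_items, n_waves = 300, 6, 3
theta = rng.normal(size=n_persons)
b = np.linspace(-1, 1, n_items)          # baseline Rasch item difficulties

scores = np.empty((n_persons, n_waves))
for t in range(n_waves):
    b_t = b.copy()
    b_t[0] += 0.3 * t                    # item 1 drifts harder over time (IPD)
    p = 1 / (1 + np.exp(-(theta[:, None] - b_t[None, :])))
    responses = (rng.random((n_persons, n_items)) < p).astype(int)
    scores[:, t] = responses.sum(axis=1) # wave-specific scale score

# 'scores' is the drift-contaminated time-varying predictor; a simulation
# study would vary drift size and direction and check parameter recovery.
print(scores[:3])
```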
Peer reviewed
Khaksefidi, Saman – International Education Studies, 2017
This study investigates the psychological effect of a flawed question with incorrect answer options on answering the next question in a test of structure. Forty students selected through stratified random sampling were given 15 questions from a standardized test, namely a TOEFL structure test, in which questions 7 and 11 are wrong and their answers…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Statistical Analysis
Peer reviewed
Sheybani, Elias; Zeraatpishe, Mitra – International Journal of Language Testing, 2018
Test method is deemed to affect test scores along with examinee ability (Bachman, 1996). In this research, the role of the method facet in reading comprehension tests is studied. Bachman divided the method facet into five categories, one of which concerns the nature of the input and the nature of the expected response. This study examined the role of method effect in…
Descriptors: Reading Comprehension, Reading Tests, Test Items, Test Format
Peer reviewed
Baghaei, Purya; Ravand, Hamdollah – SAGE Open, 2019
In many reading comprehension tests, different test formats are employed. Two commonly used test formats to measure reading comprehension are sustained passages followed by some questions and cloze items. Individual differences in handling test format peculiarities could constitute a source of score variance. In this study, a bifactor Rasch model…
Descriptors: Cloze Procedure, Test Bias, Individual Differences, Difficulty Level
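A bifactor structure of the kind this record analyzes posits a general reading dimension plus format-specific dimensions for passage-based and cloze items. The sketch below only generates data from such a structure with hypothetical parameters; it is not the authors' estimation code, and fitting the actual bifactor Rasch model would require dedicated IRT software:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
general = rng.normal(size=n)              # general reading ability
passage_spec = 0.5 * rng.normal(size=n)   # passage-format specific factor
cloze_spec = 0.5 * rng.normal(size=n)     # cloze-format specific factor

b_passage = np.linspace(-1, 1, 10)        # hypothetical item difficulties
b_cloze = np.linspace(-1, 1, 10)

def simulate(theta_g, theta_s, b):
    """Rasch-type responses driven by general plus format-specific ability."""
    p = 1 / (1 + np.exp(-(theta_g[:, None] + theta_s[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).astype(int)

passage_items = simulate(general, passage_spec, b_passage)
cloze_items = simulate(general, cloze_spec, b_cloze)
# An estimation routine would try to recover the general dimension and the
# two format-specific dimensions from these responses.
print(passage_items.shape, cloze_items.shape)
```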
Peer reviewed
Baghaei, Purya; Aryadoust, Vahid – International Journal of Testing, 2015
Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a shared…
Descriptors: Test Format, Item Response Theory, Models, Test Items
Peer reviewed
Baghaei, Purya; Dourakhshan, Alireza – International Journal of Language Testing, 2016
The purpose of the present study is to compare the psychometric qualities of canonical single-response multiple-choice items with their double-response counterparts. Thirty two-response, four-option grammar items for undergraduate students of English were constructed. A second version of the test was constructed by replacing one of the correct…
Descriptors: Language Tests, Multiple Choice Tests, Test Items, Factor Analysis
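Double-response items of the kind compared above can be scored in more than one way; the sketch below contrasts all-or-nothing with partial-credit scoring using hypothetical keys and responses, without claiming this is the scoring rule the authors adopted:

```python
# Hypothetical keys and responses for two-response, four-option items.
keys = [{"A", "C"}, {"B", "D"}, {"A", "B"}]
responses = [{"A", "C"}, {"B"}, {"C", "D"}]

def score_all_or_nothing(resp, key):
    """1 only if exactly the two correct options are marked."""
    return int(resp == key)

def score_partial_credit(resp, key):
    """One point per correctly marked option (no penalty for wrong marks)."""
    return len(resp & key)

dichotomous = [score_all_or_nothing(r, k) for r, k in zip(responses, keys)]
polytomous = [score_partial_credit(r, k) for r, k in zip(responses, keys)]
print(dichotomous, polytomous)   # [1, 0, 0] [2, 1, 0]
```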
Peer reviewed
Ahmadi, Seyyed Rasool Mirghasempour – Anatolian Journal of Education, 2016
Through the introduction of different dimensions of vocabulary knowledge, namely the depth and breadth dimensions, various studies have attempted to examine the factors affecting these dimensions. The present study aimed to show the effects of different vocabulary learning styles through extensive and intensive reading programs on depth and breadth…
Descriptors: Incidental Learning, Vocabulary Development, Second Language Learning, Cognitive Style
Peer reviewed
Ebadi, Saman; Saeedian, Abdulbaset – Teaching English with Technology, 2016
Dynamic Assessment (DA) is a postmodern notion in testing which sees instruction and assessment as inextricably mingled, contending that learners will progress if provided with dynamic interactions. The main purpose of the study is to see whether the scores generated by computerized dynamic assessment (C-DA), which is grounded in Vygotsky's…
Descriptors: Instructional Design, Second Language Learning, Second Language Instruction, Postmodernism
Peer reviewed
Karami, Hossein – Asia Pacific Education Review, 2013
There has been a growing consensus among educational measurement experts and psychometricians that test-taker characteristics may unduly affect performance on tests. This may lead to construct-irrelevant variance in the scores and thus render the test biased. Hence, it is incumbent on test developers and users alike to provide evidence…
Descriptors: Foreign Countries, Gender Differences, High Stakes Tests, Language Proficiency