Showing 1 to 15 of 21 results
Bronson Hui – ProQuest LLC, 2021
Vocabulary researchers have started expanding their assessment toolbox by incorporating timed tasks and psycholinguistic instruments (e.g., priming tasks) to gain insights into lexical development (e.g., Elgort, 2011; Godfroid, 2020b; Nakata & Elgort, 2020; Vandenberghe et al., 2021). These time-sensitive and implicit word measures differ…
Descriptors: Measures (Individuals), Construct Validity, Decision Making, Vocabulary Development
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Peer reviewed
Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony – Educational and Psychological Measurement, 2018
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…
Descriptors: Test Items, Test Format, Correlation, Construct Validity
Peer reviewed
Scott, Terry F.; Schumayer, Dániel – Physical Review Physics Education Research, 2017
The Force Concept Inventory is one of the most popular and most analyzed multiple-choice concept tests used to investigate students' understanding of Newtonian mechanics. The correct answers poll a set of underlying Newtonian concepts and the coherence of these underlying concepts has been found in the data. However, this inventory was constructed…
Descriptors: World Views, Scientific Concepts, Scientific Principles, Multiple Choice Tests
Peer reviewed
Full text available on ERIC (PDF)
Sheybani, Elias; Zeraatpishe, Mitra – International Journal of Language Testing, 2018
Test method is deemed to affect test scores along with examinee ability (Bachman, 1996). This research studies the role of the method facet in reading comprehension tests. Bachman divided the method facet into five categories, one of which concerns the nature of the input and the nature of the expected response. This study examined the role of method effect in…
Descriptors: Reading Comprehension, Reading Tests, Test Items, Test Format
Peer reviewed
Culligan, Brent – Language Testing, 2015
This study compared three common vocabulary test formats, the Yes/No test, the Vocabulary Knowledge Scale (VKS), and the Vocabulary Levels Test (VLT), as measures of vocabulary difficulty. Vocabulary difficulty was defined as the item difficulty estimated through Item Response Theory (IRT) analysis. Three tests were given to 165 Japanese students,…
Descriptors: Language Tests, Test Format, Comparative Analysis, Vocabulary
Peer reviewed
Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia – European Educational Research Journal, 2017
The combination of different item formats is found quite often in large scale assessments, and analyses on the dimensionality often indicate multi-dimensionality of tests regarding the task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and…
Descriptors: Foreign Countries, Computer Literacy, Information Literacy, International Assessment
Peer reviewed
Keller, Lisa A.; Keller, Robert R. – Applied Measurement in Education, 2015
Equating test forms is an essential activity in standardized testing, with increased importance with the accountability systems in existence through the mandate of Adequate Yearly Progress. It is through equating that scores from different test forms become comparable, which allows for the tracking of changes in the performance of students from…
Descriptors: Item Response Theory, Rating Scales, Standardized Tests, Scoring Rubrics
Peer reviewed
Zhang, Xijuan; Savalei, Victoria – Educational and Psychological Measurement, 2016
Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format,…
Descriptors: Factor Structure, Psychological Testing, Alternative Assessment, Test Items
Peer reviewed
Wan, Lei; Henly, George A. – Applied Measurement in Education, 2012
Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats, the figural response (FR) and constructed response (CR) formats, used in a K-12 computerized…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Measurement
Peer reviewed
Scarpati, Stanley E.; Wells, Craig S.; Lewis, Christine; Jirka, Stephen – Journal of Special Education, 2011
The purpose of this study was to use differential item functioning (DIF) and latent mixture model analyses to explore factors that explain performance differences on a large-scale mathematics assessment between examinees allowed to use a calculator or who were afforded item presentation accommodations versus those who did not receive the same…
Descriptors: Testing Accommodations, Test Items, Test Format, Validity
Peer reviewed
Romhild, Anja; Kenyon, Dorry; MacGregor, David – Language Assessment Quarterly, 2011
This study examined the role of domain-general and domain-specific linguistic knowledge in the assessment of academic English language proficiency using a latent variable modeling approach. The goal of the study was to examine if modeling of domain-specific variance results in improved model fit and well-defined latent factors. Analyses were…
Descriptors: Concept Formation, English (Second Language), Language Proficiency, Second Language Learning
Peer reviewed
Comrey, Andrew L. – Journal of Consulting and Clinical Psychology, 1988
Addresses common pitfalls in homogeneous scale construction in clinical and social psychology. Offers suggestions about item writing, answer scale formats, data analysis procedures, and overall scale development strategy. Emphasizes effective use of factor-analytic methods to select items for scales and to determine their proper location in…
Descriptors: Clinical Psychology, Data Analysis, Factor Analysis, Personality Measures
De Champlain, Andre; Gessaroli, Marc E. – 1991
A new index for assessing the dimensionality underlying a set of test items was investigated. The incremental fit index (IFI) is based on the sum of squares of the residual covariances. Purposes of the study were to: (1) examine the distribution of the IFI in the null situation, with truly unidimensional data; (2) examine the rejection rate of the…
Descriptors: Equations (Mathematics), Factor Analysis, Foreign Countries, Item Response Theory
Peer reviewed
Osterlind, Steven J.; Miao, Danmin; Sheng, Yanyan; Chia, Rosina C. – International Journal of Testing, 2004
This study investigated the interaction between different cultural groups and item type, and the ensuing effect on construct validity for a psychological inventory, the Myers-Briggs Type Indicator (MBTI, Form G). The authors analyzed 94 items from 2 Chinese-translated versions of the MBTI (Form G) for factorial differences among groups of…
Descriptors: Test Format, Undergraduate Students, Cultural Differences, Test Validity