Showing 1 to 15 of 64 results
Peer reviewed
Direct link
Monica Casella; Pasquale Dolce; Michela Ponticorvo; Nicola Milano; Davide Marocco – Educational and Psychological Measurement, 2024
Short-form development is an important topic in psychometric research, one that confronts researchers with methodological choices at several steps. The statistical techniques traditionally used for shortening tests, which belong to the so-called exploratory approach, make assumptions that are not always verified in psychological data. This article proposes a…
Descriptors: Artificial Intelligence, Test Construction, Test Format, Psychometrics
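The "traditional exploratory" shortening techniques this entry alludes to include simple item-selection heuristics. As a point of reference only (not the article's proposal), here is a minimal Python sketch of one classic heuristic, retaining the k items with the highest corrected item-total correlations; the function name and toy data are invented.

import numpy as np

def corrected_item_total(responses: np.ndarray) -> np.ndarray:
    """Correlation of each item with the summed score of the other items."""
    n_items = responses.shape[1]
    total = responses.sum(axis=1)
    r = np.empty(n_items)
    for j in range(n_items):
        rest = total - responses[:, j]      # total score excluding item j
        r[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return r

# Toy data: 200 respondents x 10 Likert items (values 1-5), invented.
rng = np.random.default_rng(0)
data = rng.integers(1, 6, size=(200, 10)).astype(float)

k = 5                                       # target short-form length
keep = np.argsort(corrected_item_total(data))[-k:]
print("items retained:", sorted(keep.tolist()))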
Peer reviewed
Direct link
Cornelia Eva Neuert – Sociological Methods & Research, 2024
The quality of data in surveys is affected by response burden and questionnaire length. With an increasing number of questions, respondents can become bored, tired, and annoyed and may take shortcuts to reduce the effort needed to complete the survey. In this article, direct evidence is presented on how the position of items within a web…
Descriptors: Online Surveys, Test Items, Test Format, Test Construction
Peer reviewed
Direct link
Kunz, Tanja; Meitinger, Katharina – Field Methods, 2022
Although list-style open-ended questions generally help us gain deeper insights into respondents' thoughts, opinions, and behaviors, the quality of responses is often compromised. We tested a dynamic and a follow-up design to motivate respondents to give higher quality responses than with a static design, but without overburdening them. Our…
Descriptors: Online Surveys, Item Response Theory, Test Items, Test Format
Peer reviewed
Direct link
Cui, Zhongmin; He, Yong – Measurement: Interdisciplinary Research and Perspectives, 2023
Careful consideration is necessary when choosing an anchor test form from a list of old test forms for equating under the random groups design. The choice of the anchor form potentially affects the accuracy of equated scores on new test forms. Few guidelines, however, can be found in the literature on choosing the anchor form…
Descriptors: Test Format, Equated Scores, Best Practices, Test Construction
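For readers unfamiliar with the design named here: under the random groups design, randomly equivalent examinee groups take the new form and the chosen old (anchor) form, so the two score scales can be aligned directly. Below is a minimal sketch of linear equating under that design; the score distributions are invented and this is not the authors' procedure.

import numpy as np

def linear_equate(x_scores: np.ndarray, y_scores: np.ndarray, x: float) -> float:
    """Linear equating of new-form score x onto the old-form scale:
    l_Y(x) = mu_Y + (sd_Y / sd_X) * (x - mu_X)."""
    mu_x, sd_x = x_scores.mean(), x_scores.std(ddof=1)
    mu_y, sd_y = y_scores.mean(), y_scores.std(ddof=1)
    return mu_y + (sd_y / sd_x) * (x - mu_x)

# Invented score distributions for two randomly equivalent groups.
rng = np.random.default_rng(1)
new_form = rng.normal(30, 6, size=2000)   # group taking the new form
old_form = rng.normal(32, 5, size=2000)   # group taking the anchor (old) form
print(round(linear_equate(new_form, old_form, 30.0), 2))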
Peer reviewed
Direct link
Kmetty, Zoltán; Stefkovics, Ádám – International Journal of Social Research Methodology, 2022
Questionnaire design choices, such as how 'do not know' answers are handled or whether check-all-that-apply questions are used, may impact the level of item and unit nonresponse. We used experimental data from an online panel and questions from the ESS to examine how applying forced answering, offering DK/NA options, and specific question formats (CATA vs. forced…
Descriptors: Item Response Theory, Test Format, Response Rates (Questionnaires), Questionnaires
Jiajing Huang – ProQuest LLC, 2022
The nonequivalent-groups anchor-test (NEAT) data-collection design is commonly used in large-scale assessments. Under this design, different test groups take different test forms. Each test form has its own unique items and all test forms share a set of common items. If item response theory (IRT) models are applied to analyze the test data, the…
Descriptors: Item Response Theory, Test Format, Test Items, Test Construction
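For context on the linking step this design implies: the common items carry the link between forms, and one standard way (not necessarily the one used in this dissertation) to place a new form's IRT parameters on the reference scale is the mean-sigma transformation of the common items' difficulty estimates. All parameter values below are invented.

import numpy as np

def mean_sigma(b_ref: np.ndarray, b_new: np.ndarray):
    """Mean-sigma linking constants A, B so that A * b_new + B is on the
    reference scale, estimated from the common items' difficulties."""
    A = b_ref.std(ddof=1) / b_new.std(ddof=1)
    B = b_ref.mean() - A * b_new.mean()
    return A, B

# Invented difficulty estimates of the common items on each form's scale.
b_reference = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
b_new_form = np.array([-1.0, -0.2, 0.3, 1.1, 1.9])

A, B = mean_sigma(b_reference, b_new_form)
print(A, B, (A * b_new_form + B).round(2))   # new form on the reference scale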
Peer reviewed
Direct link
Wim J. van der Linden; Luping Niu; Seung W. Choi – Journal of Educational and Behavioral Statistics, 2024
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint…
Descriptors: Adaptive Testing, Test Construction, Test Format, Test Reliability
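The within-subtest level of adaptation is conventionally driven by maximum-information item selection; the sketch below shows that rule for a 2PL response model and is a generic illustration, not the authors' two-level model. The item pool is invented.

import numpy as np

def item_information(theta: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fisher information of 2PL items at ability theta:
    I(theta) = a^2 * P * (1 - P), with P = logistic(a * (theta - b))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def next_item(theta: float, a: np.ndarray, b: np.ndarray, administered: set) -> int:
    """Select the not-yet-administered item with maximum information."""
    info = item_information(theta, a, b)
    info[list(administered)] = -np.inf     # mask items already given
    return int(np.argmax(info))

# Invented pool for one subtest: discriminations a and difficulties b.
a = np.array([1.2, 0.8, 1.5, 1.0, 2.0])
b = np.array([-1.0, 0.0, 0.5, 1.0, 0.2])
print(next_item(theta=0.3, a=a, b=b, administered={0}))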
Peer reviewed
Direct link
Kyung-Mi O. – Language Testing in Asia, 2024
This study examines the efficacy of artificial intelligence (AI) in creating test items parallel to human-made ones. Two test forms were developed: one consisting of 20 existing human-made items and another with 20 new items generated with ChatGPT assistance. Expert reviews confirmed the content parallelism of the two test forms…
Descriptors: Comparative Analysis, Artificial Intelligence, Computer Software, Test Items
Peer reviewed
Direct link
Gerring, John; Pemstein, Daniel; Skaaning, Svend-Erik – Sociological Methods & Research, 2021
A key obstacle to measurement is the aggregation problem. Where indicators tap into common latent traits in theoretically meaningful ways, the problem may be solved by applying a data-informed ("inductive") measurement model, for example, factor analysis, structural equation models, or item response theory. Where they do not, researchers…
Descriptors: Test Construction, Measures (Individuals), Concept Formation, Social Science Research
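As one concrete instance of the "data-informed" aggregation described here, a first principal component can stand in for the simplest factor-analytic weighting. This is an illustration of the general idea with invented data, not the authors' recommended model.

import numpy as np

def first_component_scores(indicators: np.ndarray) -> np.ndarray:
    """Aggregate correlated indicators into one score via their first
    principal component, a data-informed weighting of the indicators."""
    z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0, ddof=1)
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return z @ vt[0]

# Invented indicators that all tap a single latent trait.
rng = np.random.default_rng(2)
trait = rng.normal(size=300)
indicators = np.column_stack([trait + rng.normal(scale=0.5, size=300)
                              for _ in range(4)])
scores = first_component_scores(indicators)
# A component's sign is arbitrary, so report the absolute correlation.
print(round(abs(float(np.corrcoef(scores, trait)[0, 1])), 2))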
Peer reviewed
Download full text (PDF on ERIC)
Wolkowitz, Amanda A.; Foley, Brett; Zurn, Jared – Practical Assessment, Research & Evaluation, 2023
The purpose of this study is to introduce a method for converting scored 4-option multiple-choice (MC) items into scored 3-option MC items without re-pretesting the 3-option items. This study describes a six-step process for achieving this goal. Data from a professional credentialing exam were used, and the method was applied to 24…
Descriptors: Multiple Choice Tests, Test Items, Accuracy, Test Format
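The abstract does not reproduce the six steps, and no attempt is made to reconstruct them here. The sketch below shows only the generic preliminary move in a 4-to-3 option conversion, identifying each item's least-selected distractor from existing response counts; all values are invented.

def weakest_distractor(option_counts: dict, key: str) -> str:
    """Return the least-selected incorrect option of an item, given the
    count of examinees choosing each option and the keyed answer."""
    distractors = {opt: n for opt, n in option_counts.items() if opt != key}
    return min(distractors, key=distractors.get)

# Invented option counts for one 4-option item keyed 'B'.
counts = {"A": 140, "B": 610, "C": 35, "D": 215}
print(weakest_distractor(counts, key="B"))   # -> 'C' (the candidate to drop)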
Peer reviewed
Download full text (PDF on ERIC)
Sayin, Ayfer; Sata, Mehmet – International Journal of Assessment Tools in Education, 2022
The aim of the present study was to examine Turkish teacher candidates' competency levels in writing different types of test items, using Rasch analysis. In addition, the effect of the expertise of the raters who scored the items written by the teacher candidates was examined. Eighty-four Turkish teacher candidates…
Descriptors: Foreign Countries, Item Response Theory, Evaluators, Expertise
Peer reviewed
Download full text (PDF on ERIC)
Duru, Erdinc; Ozgungor, Sevgi; Yildirim, Ozen; Duatepe-Paksu, Asuman; Duru, Sibel – International Journal of Assessment Tools in Education, 2022
The aim of this study is to develop a valid and reliable tool for measuring the critical thinking skills of university students. The Pamukkale Critical Thinking Skills Scale was developed in two separate forms: multiple-choice and open-ended. The validity and reliability studies of the multiple-choice form were conducted on two different…
Descriptors: Critical Thinking, Cognitive Measurement, Test Validity, Test Reliability
Peer reviewed
Download full text (PDF on ERIC)
Wang, Lin – ETS Research Report Series, 2019
Rearranging response options across different versions of a multiple-choice test can be an effective strategy against cheating. This study investigated whether rearranging response options would affect item performance and test score comparability. A study test was assembled as the base version, from which 3 variant versions were…
Descriptors: Multiple Choice Tests, Test Items, Test Format, Scores
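Variant versions of the kind studied here can be assembled by deterministically permuting each item's options per version. This is a generic sketch with an invented item, not the report's assembly procedure.

import random

def scramble_options(options: list, key_index: int, seed: int):
    """Return a permuted copy of an item's options and the key's new
    position, so each version presents the options in a different order."""
    rng = random.Random(seed)              # one seed per (item, version) pair
    order = list(range(len(options)))
    rng.shuffle(order)
    return [options[i] for i in order], order.index(key_index)

# Invented item; three variant versions, each with its own option order.
options = ["photosynthesis", "respiration", "osmosis", "diffusion"]
for version in range(3):
    print(version, scramble_options(options, key_index=0, seed=version))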
Peer reviewed
Direct link
Kim, Sohee; Cole, Ki Lynn; Mwavita, Mwarumba – International Journal of Testing, 2018
This study investigated the effects of linking potentially multidimensional test forms using the fixed item parameter calibration. Forms had equal or unequal total test difficulty with and without confounding difficulty. The mean square errors and bias of estimated item and ability parameters were compared across the various confounding tests. The…
Descriptors: Test Items, Item Response Theory, Test Format, Difficulty Level
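The recovery criteria named in this entry have standard definitions: bias is the mean signed error of the estimates, and mean square error is the mean of the squared errors. Restated below with invented numbers:

import numpy as np

def bias_and_mse(estimates: np.ndarray, truth: np.ndarray):
    """Parameter-recovery summaries: bias = mean(est - true),
    MSE = mean((est - true)^2)."""
    err = estimates - truth
    return float(err.mean()), float((err ** 2).mean())

# Invented recovery of five item difficulties.
true_b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
est_b = np.array([-0.9, -0.6, 0.1, 0.4, 1.2])
print(bias_and_mse(est_b, true_b))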
Peer reviewed
Direct link
Raymond, Mark R.; Stevens, Craig; Bucak, S. Deniz – Advances in Health Sciences Education, 2019
Research suggests that the three-option format is optimal for multiple choice questions (MCQs). This conclusion is supported by numerous studies showing that most distractors (i.e., incorrect answers) are selected by so few examinees that they are essentially nonfunctional. However, nearly all studies have defined a distractor as nonfunctional if…
Descriptors: Multiple Choice Tests, Credentials, Test Format, Test Items
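The definition at issue: a distractor is conventionally flagged as nonfunctional when it attracts fewer than about 5% of examinees. The sketch below applies that conventional cutoff to invented responses; note that this is precisely the kind of criterion the article re-examines, so treat the threshold as the convention, not a recommendation.

def nonfunctional_distractors(choices: list, key: str, threshold: float = 0.05):
    """Flag incorrect options chosen by fewer than `threshold` of examinees."""
    n = len(choices)
    distractors = sorted(set(choices) - {key})
    return [opt for opt in distractors if choices.count(opt) / n < threshold]

# Invented responses of 40 examinees to one item keyed 'B'.
responses = list("B" * 24 + "A" * 10 + "C" * 5 + "D")
print(nonfunctional_distractors(responses, key="B"))   # -> ['D']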