Sen, Sedat – Creativity Research Journal, 2022
The purpose of this study was to estimate the overall reliability values for the scores produced by the Runco Ideational Behavior Scale (RIBS) and to explore the variability of RIBS score reliability across studies. To achieve this, a reliability generalization meta-analysis was carried out using the 86 Cronbach's alpha estimates obtained from 77 studies…
Descriptors: Generalization, Creativity, Meta Analysis, Higher Education
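The record above describes a reliability generalization meta-analysis of coefficient alpha. As a rough illustration only (invented numbers, not the study's data or code), the sketch below computes Cronbach's alpha for one score matrix and a sample-size-weighted mean of alphas across hypothetical studies:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Invented 5-point responses from 6 respondents on 4 ideation items.
responses = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 4, 4],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
    [4, 4, 5, 5],
])
print(f"alpha = {cronbach_alpha(responses):.3f}")

# One simple way to summarize alphas across studies in a reliability
# generalization analysis: a sample-size-weighted mean (illustrative values).
alphas = np.array([0.88, 0.91, 0.84, 0.90])
sample_sizes = np.array([120, 310, 95, 240])
print(f"weighted mean alpha = {np.average(alphas, weights=sample_sizes):.3f}")
```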
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Celeste Combrinck – SAGE Open, 2024
We have less time and focus than ever before, while the demand for attention is increasing. Therefore, it is no surprise that when answering questionnaires, we often choose to strongly agree or be neutral, producing problematic and unusable data. The current study investigated the forced-choice (ipsative) format compared to the same questions on a…
Descriptors: Likert Scales, Test Format, Surveys, Design
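The forced-choice (ipsative) issue raised in the record above comes from a scoring constraint: every respondent's totals sum to the same constant, unlike Likert totals. A minimal sketch with invented data showing that property:

```python
import numpy as np

# Hypothetical forced-choice block: each respondent assigns the ranks 1-4 across
# four statements, so every row sums to the same constant (ipsative constraint).
forced_choice = np.array([
    [4, 1, 3, 2],
    [2, 4, 1, 3],
    [1, 3, 4, 2],
])
print("forced-choice row sums:", forced_choice.sum(axis=1))   # identical for everyone

# The same statements rated on a 5-point Likert scale have unconstrained totals.
likert = np.array([
    [5, 2, 4, 3],
    [4, 5, 2, 5],
    [1, 3, 5, 2],
])
print("Likert row sums:       ", likert.sum(axis=1))          # vary across respondents
```

The fixed row total is what induces the artificial negative correlations among scales that factor-analytic critiques of ipsative data usually target.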
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to examine differences in individuals' abilities, their standard errors, and the psychometric properties of the test across its two administration modes (electronic and paper). A descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
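Comparing ability estimates and standard errors across paper and electronic administrations generally means fitting the same IRT model to both response sets. A toy sketch (simple Rasch maximum-likelihood estimation with made-up item difficulties and responses, not the study's analysis):

```python
import numpy as np

def rasch_ability(responses: np.ndarray, difficulties: np.ndarray, iters: int = 25):
    """Maximum-likelihood ability and its standard error under the Rasch model,
    given a 0/1 response vector and known item difficulties (in logits).
    Assumes a mixed response pattern (perfect scores have no finite ML estimate)."""
    theta, info = 0.0, 1.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))  # P(correct) per item
        info = np.sum(p * (1.0 - p))                        # test information at theta
        theta += np.sum(responses - p) / info               # Newton-Raphson step
    return theta, 1.0 / np.sqrt(info)

difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # invented item difficulties
paper      = np.array([1, 1, 1, 0, 0])                 # hypothetical paper-mode responses
electronic = np.array([1, 1, 0, 0, 0])                 # hypothetical online responses
for mode, resp in [("paper", paper), ("electronic", electronic)]:
    theta, se = rasch_ability(resp, difficulties)
    print(f"{mode}: theta = {theta:+.2f}, SE = {se:.2f}")
```

Under the Rasch model the standard error is the inverse square root of the test information at the estimated ability, which is what permits a mode-by-mode comparison of measurement precision.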
Muhammed Parviz; Masoud Azizi – Discover Education, 2025
This article offers a critical review of the Ministry of Science, Research, and Technology English Proficiency Test (MSRT), a high-stakes exam required for postgraduate graduation, scholarships, and certain employment positions in Iran. Despite its widespread use, the design and implementation of the MSRT raise concerns about its validity and…
Descriptors: Language Tests, Language Proficiency, English (Second Language), Second Language Learning
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often improved overall test scores. What prompts students to modify their answers? This study aims to examine answer modification on a scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
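Answer-change studies like the one above usually begin by classifying each change as wrong-to-right, right-to-wrong, or wrong-to-wrong and comparing the counts. A minimal sketch with invented records, not the authors' dataset:

```python
from collections import Counter

# Each record: (first answer, final answer, correct key) for one examinee-item pair.
changes = [
    ("B", "C", "C"),   # wrong -> right
    ("A", "D", "C"),   # wrong -> wrong
    ("C", "B", "C"),   # right -> wrong
    ("D", "C", "C"),   # wrong -> right
]

def classify(first: str, final: str, key: str) -> str:
    before = "right" if first == key else "wrong"
    after = "right" if final == key else "wrong"
    return f"{before}->{after}"

tally = Counter(classify(*rec) for rec in changes if rec[0] != rec[1])
print(tally)   # Counter({'wrong->right': 2, 'wrong->wrong': 1, 'right->wrong': 1})
```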
Calderón Carvajal, Carlos; Ximénez Gómez, Carmen; Lay-Lisboa, Siu; Briceño, Mauricio – Journal of Psychoeducational Assessment, 2021
Kolb's Learning Style Inventory (LSI) continues to generate a great debate among researchers, given the contradictory evidence regarding its psychometric properties. One primary criticism focuses on the artificiality of the results derived from its internal structure because of the ipsative nature of the forced-choice format. This study seeks…
Descriptors: Factor Structure, Psychometrics, Test Format, Test Validity
Papenberg, Martin; Diedenhofen, Birk; Musch, Jochen – Journal of Experimental Education, 2021
Testwiseness may introduce construct-irrelevant variance to multiple-choice test scores. Presenting response options sequentially has been proposed as a potential solution to this problem. In an experimental validation, we determined the psychometric properties of a test based on the sequential presentation of response options. We created a strong…
Descriptors: Test Wiseness, Test Validity, Test Reliability, Multiple Choice Tests
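Sequential presentation, as studied in the record above, shows response options one at a time so they cannot be compared against each other for testwiseness cues. A hypothetical sketch of that administration logic, not the authors' software:

```python
# Hypothetical administration loop: each option is revealed and judged on its own,
# so the examinee cannot compare options against one another for cues.
def administer_sequentially(stem, options, key, decide):
    print(stem)
    for option in options:
        if decide(option):        # True means the examinee accepts this option
            return option == key  # scored right/wrong on the accepted option
    return False                  # every option was rejected

# Simulated examinee policy, for demonstration purposes only.
correct = administer_sequentially(
    "Which coefficient expresses internal consistency?",
    ["Standard error", "Cronbach's alpha", "Effect size", "Percentile rank"],
    key="Cronbach's alpha",
    decide=lambda option: option == "Cronbach's alpha",
)
print("scored correct" if correct else "scored incorrect")
```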
Vispoel, Walter Peter; Morris, Carrie Ann; Sun, Linan – Journal of Experimental Education, 2019
In two independent studies of questionnaire administration, respondents completed multidimensional self-concept inventories within four randomized research conditions that mirrored the most common administration formats used in practice: paper booklets with and without answer sheets and computer questionnaires with single versus multiple items per…
Descriptors: Self Concept Measures, Computer Assisted Testing, Questionnaires, Psychometrics
Keng, Leslie; Boyer, Michelle – National Center for the Improvement of Educational Assessment, 2020
ACT requested assistance from the National Center for the Improvement of Educational Assessment (Center for Assessment) to investigate score declines in states administering the ACT to their 11th-grade students in 2018. This request emerged from conversations among state leaders, the Center for Assessment, and ACT in trying to understand the…
Descriptors: College Entrance Examinations, Scores, Test Score Decline, Educational Trends
Bearman, Margaret; Ajjawi, Rola – Advances in Health Sciences Education, 2018
The Objective Structured Clinical Examination (OSCE) is a ubiquitous part of medical education, although there is some debate about its value, particularly around its possible impact on learning. Literature and research regarding the OSCE are most often situated within the psychometric or competency discourses of assessment. This paper describes an…
Descriptors: Psychometrics, Medical Education, Medical Students, Interpersonal Relationship
Agustinus Hardi Prasetyo – ProQuest LLC, 2023
Studies have shown that language assessment literacy (LAL) is important for language teachers since they make important classroom decisions to improve student learning based on their assessments. However, some studies have shown that teachers need more knowledge and skills in assessment. Teachers also appear to lack confidence in assessing their students…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Second Language Instruction
Papanastasiou, Elena C. – Practical Assessment, Research & Evaluation, 2015
If good measurement depends in part on the estimation of accurate item characteristics, it is essential that test developers become aware of discrepancies that may exist in the item parameters before and after item review. The purpose of this study was to examine the answer-changing patterns of students while taking paper-and-pencil multiple…
Descriptors: Psychometrics, Difficulty Level, Test Items, Multiple Choice Tests
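One way to see the discrepancy examined above is to compare classical item difficulty (the proportion answering correctly) computed from first responses with the value computed after answer changes. A toy sketch with invented scored responses:

```python
import numpy as np

# 0/1 scored responses for 6 examinees x 3 items, before and after answer review.
first_scored = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
])
final_scored = first_scored.copy()
final_scored[3, 0] = 1   # one wrong-to-right change on item 1
final_scored[1, 2] = 1   # one wrong-to-right change on item 3

p_before = first_scored.mean(axis=0)   # classical difficulty from first responses
p_after = final_scored.mean(axis=0)    # classical difficulty from final responses
print("p-values before review:", np.round(p_before, 2))
print("p-values after review: ", np.round(p_after, 2))
```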
Zhang, Li-Fang – Educational Psychology, 2016
To overcome the major weakness in the response format of the Defense Mechanisms Inventory and to use the information most relevant to the population concerned in the present study, an alternative form of the Defense Mechanisms Inventory (DMI-AF) was designed. The 80 Likert-scaled items in the inventory were tested among 385 university students in…
Descriptors: Foreign Countries, Defense Mechanisms, Likert Scales, College Students
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André – Applied Measurement in Education, 2016
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Descriptors: Psychometrics, Multiple Choice Tests, Test Items, Item Analysis
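Automatic item generation, as described in the record above, typically instantiates an item model (a stem template with variable slots and a key rule) across allowed slot values to produce many parallel items. A toy sketch of that idea, with a hypothetical template and values rather than the authors' generator:

```python
import itertools
import random

random.seed(0)

# Hypothetical item model: a stem template with numeric slots plus a key rule.
STEM = "A patient receives {dose} mg of a drug every {hours} hours. What is the total daily dose in mg?"

def generate_item(dose: int, hours: int) -> dict:
    key = dose * (24 // hours)                        # correct total daily dose
    distractors = {key + dose, key - dose, dose * hours}
    distractors.discard(key)                          # keep distractors distinct from the key
    options = [key] + sorted(distractors)[:3]
    random.shuffle(options)
    return {"stem": STEM.format(dose=dose, hours=hours), "options": options, "key": key}

# Systematically instantiate the model over its allowed slot values.
items = [generate_item(d, h) for d, h in itertools.product((250, 500), (6, 8, 12))]
for item in items[:2]:
    print(item)
```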