Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 6
Since 2016 (last 10 years): 13
Since 2006 (last 20 years): 27
Descriptor
College Students: 54
Test Format: 54
Test Validity: 42
Higher Education: 25
Test Reliability: 22
Foreign Countries: 21
Test Construction: 18
Language Tests: 16
Second Language Learning: 14
English (Second Language): 13
Scores: 13
Publication Type
Reports - Research: 49
Journal Articles: 43
Speeches/Meeting Papers: 5
Numerical/Quantitative Data: 3
Reports - Descriptive: 3
Tests/Questionnaires: 3
Dissertations/Theses -…: 1
Reports - Evaluative: 1
Education Level
Higher Education: 27
Postsecondary Education: 21
Location
Turkey: 3
China: 2
Germany: 2
Japan: 2
Asia: 1
Bangladesh: 1
Canada: 1
China (Shanghai): 1
Georgia: 1
Hong Kong: 1
Iran: 1
Jang, Jung Un; Kim, Eun Joo – Journal of Curriculum and Teaching, 2022
This study examines the validity of pen-and-paper and smart-device-based tests for the optician's examination. The questions developed for each medium were based on the national optician's simulation test. The subjects of this study were 60 students enrolled at E University. The data analysis was performed to verify the equivalence of the two…
Descriptors: Optometry, Licensing Examinations (Professions), Test Format, Test Validity
Duru, Erdinc; Ozgungor, Sevgi; Yildirim, Ozen; Duatepe-Paksu, Asuman; Duru, Sibel – International Journal of Assessment Tools in Education, 2022
The aim of this study is to develop a valid and reliable tool for measuring the critical thinking skills of university students. The Pamukkale Critical Thinking Skills Scale was developed in two separate forms: multiple-choice and open-ended. The validity and reliability studies of the multiple-choice form were constructed on two different…
Descriptors: Critical Thinking, Cognitive Measurement, Test Validity, Test Reliability
Papenberg, Martin; Diedenhofen, Birk; Musch, Jochen – Journal of Experimental Education, 2021
Testwiseness may introduce construct-irrelevant variance to multiple-choice test scores. Presenting response options sequentially has been proposed as a potential solution to this problem. In an experimental validation, we determined the psychometric properties of a test based on the sequential presentation of response options. We created a strong…
Descriptors: Test Wiseness, Test Validity, Test Reliability, Multiple Choice Tests
Isler, Cemre; Aydin, Belgin – International Journal of Assessment Tools in Education, 2021
This study describes the development and validation of the Computerized Oral Proficiency Test of English as a Foreign Language (COPTEFL). The test aims to assess the speaking proficiency levels of students in the Anadolu University School of Foreign Languages (AUSFL). For this purpose, three monologic tasks were developed based on the Global…
Descriptors: Test Construction, Construct Validity, Interrater Reliability, Scores
Al-Jarf, Reima – Online Submission, 2023
This article provides a comprehensive guide to planning and designing vocabulary tests, which includes identifying the skills to be covered by the test; outlining the course content covered; preparing a table of specifications that shows the skills, content topics, and number of questions allocated to each; and preparing the test instructions. The…
Descriptors: Vocabulary Development, Learning Processes, Test Construction, Course Content
Zhang, Xian; Liu, Jianda; Ai, Haiyang – Language Testing, 2020
The main purpose of this study is to investigate guessing in the Yes/No (YN) format vocabulary test. One hundred and five university students took a YN test, a translation task, and a multiple-choice vocabulary size test (MC VST). With lexical properties matched between the real words and the pseudowords, the pseudowords could index guessing in the YN…
Descriptors: Vocabulary Development, Language Tests, Test Format, College Students
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Al-Adawi, Sharifa S. A.; Al-Balushi, Aaisha A. K. – English Language Teaching, 2016
An English placement test (PT) is an essential component of any foundation program. It helps place students at a suitable language proficiency level so that they do not spend time on materials below or above their level. It also helps teachers prepare teaching materials for students of similar levels (Brown, 2004; Illinois, 2012).…
Descriptors: Content Validity, Test Validity, Language Tests, Computer Assisted Testing
Kadhm, Sherouk J. – International Journal of Adult Vocational Education and Technology, 2017
This study aimed to examine the psychometric properties (reliability and validity) of the Arabic version of Sherouk's Critical Thinking Test. This test has four parts, each of which provides a story that is divided into an introduction and a scene; each story is then followed by a list of sensitive questions featuring two response options…
Descriptors: Cognitive Tests, Critical Thinking, Thinking Skills, Psychometrics
Polat, Murat – Novitas-ROYAL (Research on Youth and Language), 2020
Classroom practices, materials, and teaching methods in language classes have changed considerably in recent decades and continue to evolve; however, the techniques commonly used to test students' foreign language skills have not changed much, despite the recent awareness of Bloom's taxonomy. Testing units at schools rely mostly on multiple choice…
Descriptors: Multiple Choice Tests, Test Format, Test Items, Difficulty Level
Lim, Hyojung – Language Testing in Asia, 2019
Background: This study aims to empirically answer the question of whether the role of sub-reading skills changes depending on the test format (e.g., multiple-choice vs. open-ended reading questions). The test format effect also addresses the issue of test validity--whether the reading test properly elicits construct-relevant reading skills or…
Descriptors: Foreign Countries, Test Format, Language Tests, English (Second Language)
Zhang, Li-Fang – Educational Psychology, 2016
To overcome the major weakness in the response format of the Defense Mechanisms Inventory and to use the information most relevant to the population concerned in the present study, an alternative form of the Defense Mechanisms Inventory (DMI-AF) was designed. The 80 Likert-scaled items in the inventory were tested among 385 university students in…
Descriptors: Foreign Countries, Defense Mechanisms, Likert Scales, College Students
Culligan, Brent – Language Testing, 2015
This study compared three common vocabulary test formats: the Yes/No test, the Vocabulary Knowledge Scale (VKS), and the Vocabulary Levels Test (VLT), as measures of vocabulary difficulty. Vocabulary difficulty was defined as item difficulty estimated through Item Response Theory (IRT) analysis. The three tests were given to 165 Japanese students,…
Descriptors: Language Tests, Test Format, Comparative Analysis, Vocabulary
Tarun, Prashant; Krueger, Dale – Journal of Learning in Higher Education, 2016
In the United States education system, the use of student evaluations grew from 29% to 86% between 1973 and 1993, which in turn has increased the influence of student evaluations on faculty retention, tenure, and promotion. However, the impact student evaluations have had on student academic development generates complex educational…
Descriptors: Critical Thinking, Teaching Methods, Multiple Choice Tests, Essay Tests
Dutke, Stephan; Barenberg, Jonathan – Psychology Learning and Teaching, 2015
We introduce a specific type of item for knowledge tests, confidence-weighted true-false (CTF) items, and review experiences with their application in psychology courses. A CTF item is a statement about the learning content; students indicate whether the statement is true or false and rate their confidence in that judgment. Previous studies using…
Descriptors: Foreign Countries, College Students, Psychology, Objective Tests