Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 3 |
Since 2006 (last 20 years) | 5 |
Descriptor
Objective Tests | 7 |
Statistical Analysis | 7 |
Test Items | 7 |
Multiple Choice Tests | 3 |
Correlation | 2 |
Foreign Countries | 2 |
Item Analysis | 2 |
Memory | 2 |
Retention (Psychology) | 2 |
Science Tests | 2 |
Testing | 2 |
Source
Assessment & Evaluation in Higher Education | 1 |
College Student Journal | 1 |
Education Sciences | 1 |
Journal of Chemical Education | 1 |
Journal of Educational Psychology | 1 |
ProQuest LLC | 1 |
Author
Burkett, Allan R. | 1 |
Daughtry, Don | 1 |
Ganzfried, Sam | 1 |
Gopal, Arpita | 1 |
Keiffer, Elizabeth Ann | 1 |
Kelly, William E. | 1 |
Pan, Steven C. | 1 |
Rickard, Timothy C. | 1 |
Schaap, Lydia | 1 |
Schmidt, Henk | 1 |
Sevenair, John P. | 1 |
Publication Type
Journal Articles | 5 |
Reports - Research | 5 |
Dissertations/Theses -… | 1 |
Reports - Descriptive | 1 |
Reports - Evaluative | 1 |
Speeches/Meeting Papers | 1 |
Tests/Questionnaires | 1 |
Education Level
Higher Education | 3 |
Postsecondary Education | 3 |
Location
Netherlands | 2 |
California | 1 |
Kelly, William E.; Daughtry, Don – College Student Journal, 2018
This study developed an abbreviated form of Barron's (1953) Ego Strength Scale for use in research among college student samples. A version of Barron's scale was administered to 100 undergraduate college students. Using item-total score correlations and internal consistency, the scale was reduced to 18 items (Es18). The Es18 possessed adequate…
Descriptors: Undergraduate Students, Self Concept Measures, Test Length, Scores
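As a concrete illustration of the scale-reduction procedure this abstract describes, the sketch below computes corrected item-total correlations and Cronbach's alpha on simulated data. The data, the 0.30 retention threshold, and all parameter values are assumptions for illustration, not details from the study.

```python
import numpy as np

def item_total_correlations(responses):
    """Corrected item-total correlation: each item against the sum of the others."""
    totals = responses.sum(axis=1)
    return np.array([
        np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
        for j in range(responses.shape[1])
    ])

def cronbach_alpha(responses):
    """Internal consistency (Cronbach's alpha) of the item set."""
    k = responses.shape[1]
    return (k / (k - 1)) * (1 - responses.var(axis=0, ddof=1).sum()
                            / responses.sum(axis=1).var(ddof=1))

# Simulated data: 100 respondents, 30 items that all load on one latent trait
# (values assumed purely for illustration).
rng = np.random.default_rng(0)
theta = rng.normal(size=(100, 1))
responses = theta + rng.normal(size=(100, 30))

keep = item_total_correlations(responses) >= 0.30   # assumed retention threshold
reduced = responses[:, keep]
print(f"kept {keep.sum()} of 30 items, alpha = {cronbach_alpha(reduced):.2f}")
```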
Ganzfried, Sam; Yusuf, Farzana – Education Sciences, 2018
A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are used based on rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were…
Descriptors: Weighted Scores, Test Construction, Student Evaluation, Multiple Choice Tests
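The weighting problem the abstract raises can be made concrete with a small example. The sketch below computes a weight-normalized exam score; the per-question weights and partial-credit values are hypothetical and are not taken from the paper.

```python
# Hypothetical illustration of weighted exam scoring: each question carries a
# weight (e.g., reflecting estimated difficulty and length), and the grade is
# the weight-normalized sum of per-question credit in [0, 1].
weights = [2.0, 1.0, 3.0, 2.0]          # assumed per-question weights
credit  = [1.0, 0.5, 0.0, 1.0]          # one student's partial credit per question

score = sum(w * c for w, c in zip(weights, credit)) / sum(weights)
print(f"weighted score: {score:.1%}")   # 56.2% here, vs. 62.5% unweighted
```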
Pan, Steven C.; Gopal, Arpita; Rickard, Timothy C. – Journal of Educational Psychology, 2016
Does correctly answering a test question about a multiterm fact enhance memory for the entire fact? We explored that issue in 4 experiments. Subjects first studied Advanced Placement History or Biology facts. Half of those facts were then restudied, whereas the remainder were tested using "5 W" (i.e., "who, what, when, where",…
Descriptors: Undergraduate Students, Testing, Test Items, Memory
Schaap, Lydia; Verkoeijen, Peter; Schmidt, Henk – Assessment & Evaluation in Higher Education, 2014
This study investigated the effects of two different true-false questions on memory awareness and long-term retention of knowledge. Participants took four subsequent knowledge tests on curriculum learning material that they studied at different retention intervals prior to the start of this study (i.e. prior to the first test). At the first and…
Descriptors: Objective Tests, Test Items, Memory, Long Term Memory
Keiffer, Elizabeth Ann – ProQuest LLC, 2011
A differential item functioning (DIF) simulation study was conducted to explore the type and level of impact that contamination had on Type I error and power rates in DIF analyses when the suspect item favored the same or opposite group as the DIF items in the matching subtest. Type I error and power rates were displayed separately for the…
Descriptors: Test Items, Sample Size, Simulation, Identification
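The abstract does not name the DIF procedure studied, so the sketch below illustrates one common choice, the Mantel-Haenszel chi-square, on simulated data stratified by a matching-subtest score. The sample size, subtest length, and 0.5-logit DIF effect are all assumed values.

```python
import numpy as np

def mantel_haenszel_chi2(correct, group, matching_score):
    """Continuity-corrected Mantel-Haenszel chi-square for DIF on one item.
    correct: 0/1 item responses; group: 0 = reference, 1 = focal;
    matching_score: total score on the matching subtest (the stratifier)."""
    obs, exp, var = 0.0, 0.0, 0.0
    for s in np.unique(matching_score):
        m = matching_score == s
        a = np.sum((group[m] == 0) & (correct[m] == 1))  # reference, correct
        b = np.sum((group[m] == 0) & (correct[m] == 0))  # reference, incorrect
        c = np.sum((group[m] == 1) & (correct[m] == 1))  # focal, correct
        d = np.sum((group[m] == 1) & (correct[m] == 0))  # focal, incorrect
        n = a + b + c + d
        if n < 2:
            continue  # stratum too small to contribute
        obs += a
        exp += (a + b) * (a + c) / n
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n * n * (n - 1))
    return (abs(obs - exp) - 0.5) ** 2 / var

# Simulated data: a 20-item matching subtest plus one studied item with
# 0.5 logits of DIF favoring the focal group (all parameter values assumed).
rng = np.random.default_rng(1)
n_persons = 2000
group = rng.integers(0, 2, n_persons)
theta = rng.normal(size=n_persons)
subtest = rng.random((n_persons, 20)) < 1 / (1 + np.exp(-theta[:, None]))
matching_score = subtest.sum(axis=1)
p_item = 1 / (1 + np.exp(-(theta + 0.5 * group)))
item = (rng.random(n_persons) < p_item).astype(int)
print(mantel_haenszel_chi2(item, group, matching_score))  # compare to chi2(1) = 3.84
```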

Sevenair, John P.; Burkett, Allan R. – Journal of Chemical Education, 1988
Describes statistical analyses of tests used for organic chemistry classes and proposes a model to explain the results. Concludes that students who possess a slight grasp of a concept actually have a lower chance of answering an item correctly than those who merely guess. (CW)
Descriptors: Chemistry, College Science, Higher Education, Item Analysis
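The conclusion can be illustrated with a toy simulation: on a k-option multiple-choice item, pure guessers succeed at rate 1/k, while partial-knowledge students drawn to a plausible distractor can fall below that baseline. The 0.15 success probability below is an assumed value, not a figure from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 4                                  # options per multiple-choice item
n = 1000
guessers = rng.random(n) < 1 / k       # pure guessing: correct with prob 1/k
# Partial knowledge that is reliably attracted to an "almost right" distractor:
# correct with an assumed probability of 0.15, i.e. below the 0.25 chance level.
partial = rng.random(n) < 0.15

print(f"guessers correct:          {guessers.mean():.2f}  (chance = {1 / k:.2f})")
print(f"partial-knowledge correct: {partial.mean():.2f}")
```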
de Jong, John H. A. L. – 1983
The Rasch model for test analysis is a latent-trait model, which specifies the relationship between observable test performance and the unobservable traits or abilities assumed under test performance. In most cases, the test constructor has no clue as to whether the latent traits postulated by the model are indeed the abilities he wants to…
Descriptors: Ability Grouping, Cloze Procedure, Correlation, English (Second Language)
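For reference, the Rasch model this abstract refers to is the standard one-parameter logistic model: the probability of a correct response depends only on the difference between person ability theta and item difficulty b. A minimal sketch:

```python
import numpy as np

def rasch_p(theta: float, b: float) -> float:
    """Rasch model: probability that a person with ability theta answers an
    item of difficulty b correctly, P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# The latent trait enters only through theta - b: equal ability and difficulty
# give a 50% chance; one extra logit of ability raises it to about 73%.
print(rasch_p(0.0, 0.0))   # 0.5
print(rasch_p(1.0, 0.0))   # ~0.731
```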