Showing all 7 results
Peer reviewed
Baghaei, Samira; Bagheri, Mohammad Sadegh; Yamini, Mortaza – Cogent Education, 2020
The main purpose of this quantitative-qualitative content analysis study was to compare IELTS and TOEFL listening and reading tests based on the representation of the learning objectives of Revised Bloom's taxonomy. To this end, 12 Academic IELTS listening and reading tests and 12 TOEFL iBT listening and reading tests were analyzed qualitatively…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Reading Tests
Peer reviewed
Cawthon, Stephanie – American Annals of the Deaf, 2011
Linguistic complexity of test items is one test format element that has been studied in the context of struggling readers and their participation in paper-and-pencil tests. The present article presents findings from an exploratory study on the potential relationship between linguistic complexity and test performance for deaf readers. A total of 64…
Descriptors: Language Styles, Test Content, Syntax, Linguistics
Peer reviewed
Sawaki, Yasuyo; Kim, Hae-Jin; Gentile, Claudia – Language Assessment Quarterly, 2009
In cognitive diagnosis a Q-matrix (Tatsuoka, 1983, 1990), which is an incidence matrix that defines the relationships between test items and constructs of interest, has great impact on the nature of performance feedback that can be provided to score users. The purpose of the present study was to identify meaningful skill coding categories that…
Descriptors: Feedback (Response), Test Items, Test Content, Identification
Chalifour, Clark; Powers, Donald E. – 1988
In actual test development practice, the number of test items that must be developed and pretested is typically greater, and sometimes much greater, than the number eventually judged suitable for use in operational test forms. This has proven to be especially true for analytical reasoning items, which currently form the bulk of the analytical…
Descriptors: Coding, Difficulty Level, Higher Education, Test Construction
Peer reviewed
Schroeder, Carolyn M.; Scott, Timothy P.; Tolson, Homer; Huang, Tse-Yang; Lee, Yi-Hsuan – Journal of Research in Science Teaching, 2007
This project consisted of a meta-analysis of U.S. research published from 1980 to 2004 on the effect of specific science teaching strategies on student achievement. The six phases of the project included study acquisition, study coding, determination of intercoder objectivity, establishing criteria for inclusion of studies, computation of effect…
Descriptors: Test Format, Test Content, Academic Achievement, Meta Analysis
Yepes-Baraya, Mario – 1997
The study described in this paper is part of an effort to improve understanding of the science assessment of the National Assessment of Educational Progress (NAEP). It involved the coding of all the items in the 1996 NAEP science assessments, which included 45 blocks (15 each for grades 4, 8, and 12) and over 500 items. Each of the approximately…
Descriptors: Coding, Elementary School Students, Grade 4, Intermediate Grades
Buck, Gary; Kostin, Irene; Morgan, Rick – College Board, 2002
The purpose of this study is to examine the content of the questions in a number of Advanced Placement Examinations and to attempt to identify content that is related to gender-based performance differences. Free-response questions for ten forms of the AP® Exams in U.S. History, European History, Biology, Microeconomics, and Macroeconomics were…
Descriptors: Test Content, Gender Differences, Correlation, Test Items