Showing all 8 results
Razmjoo, Seyyed Ayatollah; Heydari Tabrizi, Hossein – Journal of Pan-Pacific Association of Applied Linguistics, 2010
The MA Entrance Examinations (MAEE) held in Iran since 1990 are frequently criticized as invalid, unstandardized exams with many problems in terms of testing principles in general and test construction in particular (for instance, Jafarpur, 1996). To make sound judgments about such objections, the present study dealt with a content…
Descriptors: Testing, Language Tests, Validity, Foreign Countries
Peer reviewed
Fox, Connie; Zhu, Weimo; Park, Youngsik; Fisette, Jennifer L.; Graber, Kim C.; Dyson, Ben; Avery, Marybell; Franck, Marian; Placek, Judith H.; Rink, Judy; Raynes, De – Measurement in Physical Education and Exercise Science, 2011
In addition to validity and reliability evidence, other psychometric qualities of the PE Metrics assessments needed to be examined. This article describes how those critical psychometric issues were addressed during the PE Metrics assessment bank construction. Specifically, issues included (a) number of items or assessments needed, (b) training…
Descriptors: Measures (Individuals), Psychometrics, Interrater Reliability, Training
OECD Publishing (NJ1), 2012
The "PISA 2009 Technical Report" describes the methodology underlying the PISA 2009 survey. It examines additional features related to the implementation of the project at a level of detail that allows researchers to understand and replicate its analyses. The reader will find a wealth of information on the test and sample design,…
Descriptors: Quality Control, Research Reports, Research Methodology, Evaluation Criteria
Chalifour, Clark; Powers, Donald E. – 1988
In actual test development practice, the number of test items that must be developed and pretested is typically greater, and sometimes much greater, than the number eventually judged suitable for use in operational test forms. This has proven to be especially true for analytical reasoning items, which currently form the bulk of the analytical…
Descriptors: Coding, Difficulty Level, Higher Education, Test Construction
Stocking, Martha L.; And Others – 1991
A previously developed method for automatically selecting items for inclusion in a test, subject to constraints on item content and statistical properties, is applied to real data. Two tests are first assembled by experts in test construction who normally assemble such tests on a routine basis. Using the same pool of items and constraints articulated…
Descriptors: Algorithms, Automation, Coding, Computer Assisted Testing
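For readers unfamiliar with the kind of constrained item selection the Stocking et al. abstract describes, the following is a minimal, hypothetical Python sketch of a greedy heuristic that draws items from a pool to satisfy per-content-area counts while approximating a target difficulty. It is not the authors' procedure; the item pool, field names, and constraint structure are illustrative assumptions only.

# Minimal sketch (assumption, not Stocking et al.'s method): greedy item
# selection under content-area count constraints and a target difficulty.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    content_area: str   # e.g., "algebra", "geometry" (hypothetical areas)
    difficulty: float   # e.g., an IRT b-parameter

def assemble_test(pool, required_counts, target_difficulty):
    # For each content area, take the items whose difficulty lies closest
    # to the target until the required count for that area is met.
    selected = []
    for area, count in required_counts.items():
        candidates = sorted(
            (it for it in pool if it.content_area == area),
            key=lambda it: abs(it.difficulty - target_difficulty),
        )
        if len(candidates) < count:
            raise ValueError(f"not enough items in content area '{area}'")
        selected.extend(candidates[:count])
    return selected

pool = [
    Item("A1", "algebra", -0.4), Item("A2", "algebra", 0.1),
    Item("A3", "algebra", 1.2), Item("G1", "geometry", 0.3),
    Item("G2", "geometry", -0.9), Item("G3", "geometry", 0.6),
]
test = assemble_test(pool, {"algebra": 2, "geometry": 2}, target_difficulty=0.0)
print([it.item_id for it in test])   # prints ['A2', 'A1', 'G1', 'G3']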
Longford, Nicholas T. – 1994
This study is a critical evaluation of the roles of coding and scoring of missing responses to multiple-choice items in educational tests. The focus is on tests in which the test-takers have little or no motivation; in such tests, omitting and not reaching (as classified by the currently adopted operational rules) are quite frequent. Data from the…
Descriptors: Algorithms, Classification, Coding, Models
Emmerich, Walter – 1989
The aim of this study was to develop a procedure that could be used to appraise the cognitive features of subject (achievement) tests. Cognitive taxonomies and an accompanying coding scheme were developed and applied to the Graduate Record Examinations subject tests in Psychology and Literature in English. The taxonomies were based on the manifest…
Descriptors: Achievement Tests, Classification, Coding, Cognitive Processes
Hecht, Jeffrey B.; And Others – 1993
A method of qualitative data analysis that used computer software as a tool to help organize and analyze open-ended survey responses was examined. Reasons for using open-ended, as opposed to closed-ended, questionnaire items are discussed, as well as the construction of open-ended questions and response analysis. Because the method is based on…
Descriptors: Attitude Change, Coding, Computer Assisted Testing, Computer Software