New Meridian Corporation, 2020
New Meridian Corporation has developed the "Quality Testing Standards and Criteria for Comparability Claims" (QTS) to provide guidance to states that are interested in including New Meridian content and would like to either keep reporting scores on the New Meridian Scale or use the New Meridian performance levels; that is, the state…
Descriptors: Testing, Standards, Comparative Analysis, Test Content
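Comparability claims of this kind usually rest on score linking. As a minimal illustration (assuming simple mean-sigma linear linking, not New Meridian's actual QTS procedure), scores from one form can be placed on a reference scale like this:

```python
import statistics

def mean_sigma_link(scores, ref_mean, ref_sd):
    """Place scores from one form onto a reference scale by matching
    the mean and standard deviation (mean-sigma linear linking).
    Illustrative only: real comparability work relies on common items
    or common examinees plus extensive diagnostics."""
    m = statistics.mean(scores)
    s = statistics.stdev(scores)
    a = ref_sd / s          # slope of the linear transformation
    b = ref_mean - a * m    # intercept
    return [a * x + b for x in scores]

# Hypothetical: rescale raw form scores onto a scale with mean 500, SD 50.
linked = mean_sigma_link([12, 15, 19, 22, 25], ref_mean=500, ref_sd=50)
```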
Carlson, Sarah E.; Seipel, Ben; Biancarosa, Gina; Davison, Mark L.; Clinton, Virginia – Grantee Submission, 2019
This demonstration introduces an innovative online cognitive diagnostic assessment developed to identify the types of cognitive processes that readers use during comprehension, specifically the processes that distinguish between subtypes of struggling comprehenders. Cognitive diagnostic assessments are designed to provide valuable…
Descriptors: Reading Comprehension, Standardized Tests, Diagnostic Tests, Computer Assisted Testing
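Cognitive diagnostic assessments are commonly analyzed with diagnostic classification models. As a minimal sketch, assuming the widely used DINA model rather than the authors' actual model, an item's response probability depends only on whether the reader has mastered every attribute the item requires:

```python
def dina_prob(skills, required, slip, guess):
    """DINA model: P(correct) for one item.
    skills:   dict mapping attribute name -> mastered (bool)
    required: attributes the item demands (its Q-matrix row)
    A respondent with every required attribute answers correctly
    unless they "slip"; anyone else can only "guess"."""
    has_all = all(skills.get(attr, False) for attr in required)
    return 1 - slip if has_all else guess

# Hypothetical attributes: a reader with inference but not bridging skills.
p = dina_prob({"inference": True, "bridging": False},
              required=["inference", "bridging"],
              slip=0.1, guess=0.2)   # -> 0.2
```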
International Journal of Testing, 2019
These guidelines describe considerations relevant to the assessment of test takers in or across countries or regions that are linguistically or culturally diverse. The guidelines were developed by a committee of experts to help inform test developers, psychometricians, test users, and test administrators about fairness issues in support of the…
Descriptors: Test Bias, Student Diversity, Cultural Differences, Language Usage
National Council on Measurement in Education, 2012
Testing and data integrity on statewide assessments are defined as the establishment of a comprehensive set of policies and procedures for: (1) the proper preparation of students; (2) the management and administration of the test(s) in a way that leads to accurate and appropriate reporting of assessment results; and (3) the maintenance of the security of…
Descriptors: State Programs, Integrity, Testing, Test Preparation

Gelin, Michaela N.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2003
Investigated potentially biased scale items on the Center for Epidemiological Studies Depression scale (CES-D; Radloff, 1977) in a sample of 600 adults. Overall, results indicate that the scoring method has an effect on differential item functioning (DIF), and that DIF is a property of the item, scoring method, and purpose of the assessment. (SLD)
Descriptors: Depression (Psychology), Item Bias, Scoring, Test Items
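For readers unfamiliar with DIF, a standard screen is logistic regression: predict the item response from a matching score, group membership, and their interaction. A minimal sketch on simulated data (my own toy example, not the CES-D analysis):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 600
total = rng.normal(0, 1, n)        # matching variable (e.g., rest score)
group = rng.integers(0, 2, n)      # 0 = reference group, 1 = focal group
# Simulate uniform DIF: the focal group finds the item harder.
logit = 0.8 * total - 0.5 * group
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([total, group, total * group]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)   # a nonzero group coefficient suggests uniform DIF;
                    # a nonzero interaction suggests nonuniform DIF
```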
Wainer, Howard; Thissen, David – 1992
If examinees are permitted to choose a subset of a test's questions to answer, just knowing which questions were chosen can provide a measure of proficiency that may be as reliable as the score that would have been obtained by grading the test traditionally. This new method of scoring is much less time-consuming and expensive for both the examinee and the…
Descriptors: Adaptive Testing, Cost Effectiveness, Responses, Scoring
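The claim that choices alone carry proficiency information can be made concrete with a toy simulation. Assuming (my assumption, not the authors' data) that abler examinees tend to pick harder questions, the mean difficulty of the chosen items already tracks ability:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
theta = rng.normal(0, 1, n)                  # true proficiency
difficulty = np.linspace(-2, 2, 10)          # ten optional questions
choice_score = np.empty(n)
for i in range(n):
    # Toy choice model: preference for hard items grows with ability.
    w = np.exp(0.7 * theta[i] * difficulty)
    picks = rng.choice(10, size=5, replace=False, p=w / w.sum())
    choice_score[i] = difficulty[picks].mean()   # score from choices alone
print(np.corrcoef(theta, choice_score)[0, 1])    # substantial correlation
```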

Crites, John O.; Savickas, Mark L. – Journal of Career Assessment, 1996
The Career Maturity Inventory was revised in 1995 using previously unpublished longitudinal data for item selection. The new inventory has 25 attitude and 25 competence items, yielding scores that measure the conative and cognitive dimensions of career maturity, respectively. (SK)
Descriptors: Career Development, Measures (Individuals), Scoring, Test Interpretation
Segall, Daniel O. – 1999
Two new methods for improving the measurement precision of a general test factor are proposed and evaluated. One method provides a multidimensional item response theory estimate obtained from conventional administrations of multiple-choice test items that span general and nuisance dimensions. The other method chooses items adaptively to…
Descriptors: Ability, Adaptive Testing, Item Response Theory, Measurement Techniques
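The kind of model involved here is the multidimensional 2PL, in which an item's response probability depends on a weighted combination of a general and a nuisance ability. A minimal sketch of that response function (illustrative parameterization, not Segall's):

```python
import math

def m2pl_prob(theta, a, d):
    """Multidimensional 2PL: P(correct) = logistic(a . theta + d).
    theta: ability vector, e.g., [general, nuisance]
    a:     discrimination (loading) on each dimension
    d:     intercept (easiness)"""
    z = sum(ai * ti for ai, ti in zip(a, theta)) + d
    return 1 / (1 + math.exp(-z))

# Hypothetical item loading mainly on the general factor:
p = m2pl_prob(theta=[1.0, -0.5], a=[1.2, 0.3], d=-0.4)
```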

Henderson, Metta Lou – American Journal of Pharmaceutical Education, 1984
The uses, advantages and disadvantages, preparation, and scoring of essay tests and oral tests are outlined and discussed, and sample questions of each type oriented to pharmaceutical instruction are provided. (MSE)
Descriptors: Essay Tests, Higher Education, Pharmaceutical Education, Scoring
Parsons, Jim; Fenwick, Tara – 1999
This "toolbox" offers suggestions about how and when to create objective tests. Such tests are sometimes a quick way to find out how students are doing, and sometimes they help students focus on what they are doing in class or help teachers define the content that is worth knowing. The following suggestions are offered for developing objective…
Descriptors: Elementary Secondary Education, Evaluation Methods, Foreign Countries, Objective Tests
Drasgow, Fritz, Ed.; Olson-Buchanan, Julie B., Ed. – 1999
Chapters in this book present the challenges and dilemmas faced by researchers as they created new computerized assessments, focusing on issues addressed in developing, scoring, and administering the assessments. The chapters are: (1) "Beyond Bells and Whistles: An Introduction to Computerized Assessment" (Julie B. Olson-Buchanan and Fritz Drasgow);…
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Scoring
Ackerman, Terry A. – 1991
Many researchers have suggested that the main cause of item bias is the misspecification of the latent ability space. That is, items that measure multiple abilities are scored as though they are measuring a single ability. If two different groups of examinees have different underlying multidimensional ability distributions and the test items are…
Descriptors: Equations (Mathematics), Item Bias, Item Response Theory, Mathematical Models
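Ackerman's argument can be illustrated with a toy simulation: two groups matched on the intended ability but differing on a nuisance ability will show apparent DIF on any item that loads on both dimensions (my illustration, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
# Groups match on theta1 (intended ability), differ on theta2 (nuisance).
theta1 = rng.normal(0, 1, 2 * n)
theta2 = np.concatenate([rng.normal(0.0, 1, n),    # reference group
                         rng.normal(-0.7, 1, n)])  # focal group
group = np.repeat([0, 1], n)
# An item scored as unidimensional but actually loading on both:
p = 1 / (1 + np.exp(-(1.0 * theta1 + 0.8 * theta2)))
y = rng.random(2 * n) < p
# Matching on theta1 alone leaves a group difference -> apparent DIF.
matched = np.abs(theta1) < 0.1
print(y[matched & (group == 0)].mean(), y[matched & (group == 1)].mean())
```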
Carlson, Sybil B.; Ward, William C. – 1988
Concerns about the cost and feasibility of using Formulating Hypotheses (FH) test item types on the Graduate Record Examinations have slowed research into their use. This project focused on two major issues that need to be addressed in considering FH items for operational use: the costs of scoring and the assignment of scores along a range of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Costs, Pilot Projects
Bennett, Randy Elliot – 1990
A new assessment conception is described that integrates constructed-response testing, artificial intelligence, and model-based measurement. The conception incorporates complex constructed-response items for their potential to increase the validity, instructional utility, and credibility of standardized tests. Artificial intelligence methods are…
Descriptors: Artificial Intelligence, Constructed Response, Educational Assessment, Measurement Techniques
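As a crude stand-in for the artificial intelligence methods this entry refers to, a keyword-rubric scorer shows the basic shape of automated constructed-response scoring (purely illustrative; Bennett's conception involves far richer models):

```python
def rubric_score(response, rubric):
    """Award one point per rubric concept whose keywords appear in the
    response. A toy stand-in for model-based constructed-response scoring."""
    text = response.lower()
    return sum(any(kw in text for kw in keywords)
               for keywords in rubric.values())

# Hypothetical rubric for a short science item.
rubric = {"process": ["photosynthesis", "light reaction"],
          "inputs":  ["carbon dioxide", "co2", "water"],
          "outputs": ["oxygen", "glucose", "sugar"]}
print(rubric_score("Plants use water and CO2 to make sugar.", rubric))  # 2
```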
Martinez, Michael E.; And Others – 1990
Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…
Descriptors: Automation, Computer Assisted Testing, Educational Technology, Multiple Choice Tests
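One simple way to score a figural response automatically is to compare the cells the examinee marked on the figure against a keyed region; a minimal sketch under that assumption (not the authors' procedure):

```python
def score_figural(marked, key, tolerance=0):
    """Score a figural response item: compare the set of grid cells the
    examinee marked with the keyed region; give credit when the two sets
    differ by at most `tolerance` cells."""
    mismatches = len(marked ^ key)   # symmetric difference of the sets
    return 1 if mismatches <= tolerance else 0

key = {(2, 3), (2, 4), (3, 3), (3, 4)}   # correct region on the figure
print(score_figural({(2, 3), (2, 4), (3, 3), (3, 4)}, key))  # -> 1
print(score_figural({(2, 3), (2, 4)}, key))                  # -> 0
```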