Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 7 |
Author
| Author | Count |
| --- | --- |
| McGinty, Dixie | 2 |
| Nadas, Rita | 2 |
| Neel, John H. | 2 |
| Anderson, Lorin W. | 1 |
| Biancarosa, Gina | 1 |
| Brookhart, Susan M. | 1 |
| Carlson, Sarah E. | 1 |
| Clinton, Virginia | 1 |
| Davison, Mark L. | 1 |
| De Avila, Edward A. | 1 |
| Duncan, Sharon E. | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Reports - Research | 9 |
| Journal Articles | 7 |
| Speeches/Meeting Papers | 4 |
| Reports - Evaluative | 3 |
| Guides - General | 2 |
| Reports - Descriptive | 2 |
| Books | 1 |
| Guides - Classroom - Teacher | 1 |
| Information Analyses | 1 |
| Tests/Questionnaires | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Elementary Education | 2 |
| Elementary Secondary Education | 2 |
| Higher Education | 2 |
| Postsecondary Education | 2 |
| Grade 5 | 1 |
| High Schools | 1 |
| Secondary Education | 1 |
Audience
| Audience | Count |
| --- | --- |
| Teachers | 1 |
Location
| Location | Count |
| --- | --- |
| Iran | 1 |
| United Kingdom | 1 |
Laws, Policies, & Programs
| Law / Program | Count |
| --- | --- |
| No Child Left Behind Act 2001 | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| Comprehensive Tests of Basic Skills | 1 |
| National Assessment of Educational Progress | 1 |
| Test of English as a Foreign Language | 1 |
Cushing, Sara T. – ETS Research Report Series, 2025
This report provides an in-depth comparison of TOEFL iBT® and the Duolingo English Test (DET) in terms of the degree to which both tests assess academic language proficiency in listening, reading, writing, and speaking. The analysis is based on publicly available documentation on both tests, including sample test questions available on the test…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Academic Language
Carlson, Sarah E.; Seipel, Ben; Biancarosa, Gina; Davison, Mark L.; Clinton, Virginia – Grantee Submission, 2019
This demonstration introduces and presents an innovative online cognitive diagnostic assessment, developed to identify the types of cognitive processes that readers use during comprehension; specifically, processes that distinguish between subtypes of struggling comprehenders. Cognitive diagnostic assessments are designed to provide valuable…
Descriptors: Reading Comprehension, Standardized Tests, Diagnostic Tests, Computer Assisted Testing
Suto, Irenka; Nadas, Rita – Research in Education, 2010
Our aim was to deepen understanding of public examinations, exploring how marking task demands influence examiners' cognition and ultimately their marking accuracy. To do this, we identified features of examinations that trigger or demand the use of cognitive marking strategies entailing "reflective" judgements. Kelly's Repertory Grid…
Descriptors: Scoring, Accuracy, Examiners, Cognitive Processes
Brookhart, Susan M. – ASCD, 2010
Don't settle for assessing recall and comprehension only when you can use this guide to create assessments for higher-order thinking skills. Assessment expert Susan M. Brookhart brings you up to speed on how to develop and use test questions and other assessments that reveal how well your students can analyze, reason, solve problems, and think…
Descriptors: Test Items, Performance Based Assessment, Thinking Skills, Cognitive Processes
Shahnazari-Dorcheh, Mohammadtaghi; Roshan, Saeed – English Language Teaching, 2012
Because no span test existed for use in language-specific and cross-language studies, this study provides L1 and L2 researchers with a reliable, language-independent span test (a math span test) for measuring working memory capacity. It also describes the development, validation, and scoring method of this test. This test included 70…
Descriptors: Language Research, Native Language, Second Language Learning, Scoring
Suto, W. M. Irenka; Nadas, Rita – Research Papers in Education, 2009
It has long been established that marking accuracy in public examinations varies considerably among subjects and markers. This is unsurprising, given the diverse cognitive strategies that the marking process can entail, but what makes some questions harder to mark accurately than others? Are there distinct but subtle features of questions and…
Descriptors: National Curriculum, Physics, Interviews, Examiners
Hein, Serge F.; Skaggs, Gary E. – Applied Measurement in Education, 2009
Only a small number of qualitative studies have investigated panelists' experiences during standard-setting activities or the thought processes associated with panelists' actions. This qualitative study involved an examination of the experiences of 11 panelists who participated in a prior, one-day standard-setting meeting in which either the…
Descriptors: Focus Groups, Standard Setting, Cutting Scores, Cognitive Processes
Anderson, Lorin W. – 1999
This paper describes a work in progress on a second edition of "Taxonomy of Educational Objectives, The Classification of Educational Goals, Handbook I: Cognitive Domain," also known as "Bloom's Taxonomy" (B. Bloom and others, Eds., 1956). The new edition will be grounded in the collective wisdom of the original…
Descriptors: Classification, Cognitive Processes, Educational Assessment, Educational Objectives
Jannarone, Robert J. – Psychometrika, 1986
Conjunctive item response models are introduced such that: (1) sufficient statistics for latent traits are not necessarily additive in item scores; (2) items are not necessarily locally independent; and (3) existing compensatory (additive) item response models including the binomial, Rasch, logistic, and general locally independent model are…
Descriptors: Cognitive Processes, Hypothesis Testing, Latent Trait Theory, Mathematical Models
Feldt, Leonard S. – Measurement & Evaluation in Counseling & Development, 2004
In some settings, the validity of a battery composite or a test score is enhanced by weighting some parts or items more heavily than others in the total score. This article describes methods of estimating the total score reliability coefficient when differential weights are used with items or parts.
Descriptors: Test Items, Scoring, Cognitive Processes, Test Validity
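The quantity Feldt estimates has a compact standard form. As a hedged illustration (the textbook expression for a weighted composite, not a formula quoted from the article), the reliability of a total score built from parts with weights w_i, variances sigma_i^2, and reliabilities rho_ii is

```latex
\rho_{YY'} \;=\; 1 \;-\; \frac{\sum_i w_i^{2}\,\sigma_i^{2}\,\bigl(1-\rho_{ii}\bigr)}{\sigma_Y^{2}}
```

where sigma_Y^2 is the variance of the weighted total and the part errors are assumed uncorrelated. Heavier weights amplify each part's error variance by w_i^2, which is why differential weighting changes the composite's reliability.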
Tatsuoka, Kikumi K. – 1991
Constructed-response formats are desired for measuring complex and dynamic response processes that require the examinee to understand the structures of problems and micro-level cognitive tasks. These micro-level tasks and their organized structures are usually unobservable. This study shows that elementary graph theory is useful for organizing…
Descriptors: Adult Literacy, Cognitive Measurement, Cognitive Processes, Constructed Response
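To make the graph-theoretic idea concrete, here is a minimal Python sketch; the task names and prerequisite edges below are invented for illustration, not taken from Tatsuoka's study. Component cognitive tasks become nodes in a directed graph, prerequisite relations become edges, and a simple traversal recovers every micro-level task implied by a correct response.

```python
# Hypothetical illustration: organizing unobservable micro-level cognitive
# tasks as a directed graph of prerequisite relations.

# prerequisites[task] = tasks that must be mastered before `task`
prerequisites = {
    "read_problem": [],
    "identify_operation": ["read_problem"],
    "set_up_equation": ["identify_operation"],
    "solve_equation": ["set_up_equation"],
}

def implied_tasks(task, graph):
    """Return all tasks implied by mastery of `task`: its transitive
    closure over the prerequisite edges, found by depth-first search."""
    seen = set()
    stack = [task]
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        stack.extend(graph[current])
    return seen

# A correct answer requiring "solve_equation" implies the whole chain:
print(sorted(implied_tasks("solve_equation", prerequisites)))
# ['identify_operation', 'read_problem', 'set_up_equation', 'solve_equation']
```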
Yen, Wendy M. – 1982
The three-parameter logistic model discussed was used by CTB/McGraw-Hill in the development of the Comprehensive Tests of Basic Skills, Form U (CTBS/U) and the Test of Cognitive Skills (TCS), published in the fall of 1981. The development, standardization, and scoring of the tests are described, particularly as these procedures were influenced by…
Descriptors: Achievement Tests, Bayesian Statistics, Cognitive Processes, Data Collection
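For readers unfamiliar with the model named here, the three-parameter logistic model has a standard textbook form (this is the general formula, not a detail specific to Yen's report): the probability that an examinee of ability theta answers item i correctly is

```latex
P_i(\theta) \;=\; c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}
```

where a_i is the item's discrimination, b_i its difficulty, and c_i its lower asymptote, the chance of a correct response by guessing alone.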
McGinty, Dixie; Neel, John H. – 1996
A new standard setting approach is introduced, called the cognitive components approach. Like the Angoff method, the cognitive components method generates minimum pass levels (MPLs) for each item. In both approaches, the item MPLs are summed for each judge, then averaged across judges to yield the standard. In the cognitive components approach,…
Descriptors: Cognitive Processes, Criterion Referenced Tests, Evaluation Methods, Grade 3
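The summing-and-averaging step the two methods share is easy to state in code. A minimal Python sketch, assuming a small hypothetical matrix of judge-by-item minimum pass levels (the numbers below are invented):

```python
# Hypothetical judge-by-item minimum pass levels (MPLs): each row is one
# judge, each column one item; values are invented for illustration.
mpls = [
    [0.6, 0.8, 0.5, 0.9],  # judge 1
    [0.7, 0.7, 0.6, 0.8],  # judge 2
    [0.5, 0.9, 0.4, 0.9],  # judge 3
]

# Sum the item MPLs within each judge, then average across judges.
per_judge_totals = [sum(judge) for judge in mpls]
standard = sum(per_judge_totals) / len(per_judge_totals)

print([round(t, 2) for t in per_judge_totals])  # [2.8, 2.8, 2.7]
print(round(standard, 2))                       # 2.77
```

In the cognitive components variant, the columns would hold MPLs for the component skills obtained by decomposing each item, rather than for the items themselves; the aggregation is unchanged.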
McGinty, Dixie; Neel, John H.; Hsu, Yu-Sheng – 1996
The cognitive components standard setting method, recently introduced by D. McGinty and J. Neel (1996), asks judges to specify minimum levels of performance not for the test items, but for smaller portions of items, the component skills and concepts required to answer each item correctly. Items are decomposed into these components before judges…
Descriptors: Cognitive Processes, Criterion Referenced Tests, Elementary Education, Evaluation Methods
Pearson, P. David; Garavaglia, Diane R. – National Center for Education Statistics, 2003
The purpose of this essay is to explore both what is known and what needs to be learned about the information value of performance items "when they are used in large scale assessments." Within the context of the National Assessment of Educational Progress (NAEP), there is substantial motivation for answering these questions. Over the…
Descriptors: Measurement, National Competency Tests, Test Items, Performance
