| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 6 |
| Descriptor | Count |
| --- | --- |
| Difficulty Level | 14 |
| Test Items | 14 |
| Test Use | 14 |
| Test Construction | 6 |
| Item Analysis | 5 |
| Achievement Tests | 3 |
| Foreign Countries | 3 |
| Latent Trait Theory | 3 |
| Test Content | 3 |
| Test Format | 3 |
| Academic Standards | 2 |
| Source | Count |
| --- | --- |
| Educational Measurement:… | 3 |
| Educational Assessment | 1 |
| Ministerial Council on… | 1 |
| Studies in Second Language… | 1 |
| TESL Canada Journal | 1 |
| Author | Count |
| --- | --- |
| Arikan, Serkan | 1 |
| Aybek, Eren Can | 1 |
| Dodds, Jeffrey | 1 |
| Donovan, Jenny | 1 |
| Frisbie, David A. | 1 |
| Holland, Paul W. | 1 |
| Hutton, Penny | 1 |
| Isbell, Daniel R. | 1 |
| Katz, Irvin R. | 1 |
| Lennon, Melissa | 1 |
| Lunz, Mary E. | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 6 |
| Reports - Evaluative | 5 |
| Reports - Research | 4 |
| Speeches/Meeting Papers | 4 |
| Reports - Descriptive | 3 |
| Information Analyses | 2 |
| Numerical/Quantitative Data | 1 |
| Opinion Papers | 1 |
| Education Level | Count |
| --- | --- |
| Elementary Secondary Education | 2 |
| Elementary Education | 1 |
| Grade 6 | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Secondary Education | 1 |
| Audience | Count |
| --- | --- |
| Researchers | 1 |
| Location | Count |
| --- | --- |
| Alabama | 1 |
| Australia | 1 |
| Canada | 1 |
| Indiana | 1 |
| Kansas | 1 |
| Massachusetts | 1 |
| Michigan | 1 |
| Minnesota | 1 |
| New Jersey | 1 |
| Ohio | 1 |
| Oregon | 1 |
| Laws, Policies, & Programs | Count |
| --- | --- |
| Education Consolidation… | 1 |
| Assessments and Surveys | Count |
| --- | --- |
| National Assessment of… | 1 |
| Program for International… | 1 |
| Test of English as a Foreign… | 1 |
| Test of English for… | 1 |
| Trends in International… | 1 |
Arikan, Serkan; Aybek, Eren Can – Educational Measurement: Issues and Practice, 2022
Many scholars compared various item discrimination indices in real or simulated data. Item discrimination indices, such as item-total correlation, item-rest correlation, and IRT item discrimination parameter, provide information about individual differences among all participants. However, there are tests that aim to select a very limited number…
Descriptors: Monte Carlo Methods, Item Analysis, Correlation, Individual Differences
Isbell, Daniel R.; Son, Young-A – Studies in Second Language Acquisition, 2022
Elicited Imitation Tests (EITs) are commonly used in second language acquisition (SLA)/bilingualism research contexts to assess the general oral proficiency of study participants. While previous studies have provided valuable EIT construct-related validity evidence, some key gaps remain. This study uses an integrative data analysis to further…
Descriptors: Bilingualism, Imitation, Language Tests, Second Language Learning
Stewart, Gail; Strachan, Andrea – TESL Canada Journal, 2022
Since its implementation in 2004, the Canadian English Language Benchmark Assessment for Nurses (CELBAN) has been accepted as evidence of language ability for licensure of internationally educated nurses (IENs) in Canada. This article focuses on the complexities of sustaining an occupation-specific assessment over time. The authors reference the…
Descriptors: Language Tests, English for Special Purposes, Benchmarking, Nurses
Sinharay, Sandip – Educational Measurement: Issues and Practice, 2018
The choice of anchor tests is crucial in applications of the nonequivalent groups with anchor test design of equating. Sinharay and Holland (2006, 2007) suggested "miditests," which are anchor tests that are content-representative and have the same mean item difficulty as the total test but have a smaller spread of item difficulties.…
Descriptors: Test Content, Difficulty Level, Test Items, Test Construction
Traynor, Anne – Educational Assessment, 2017
Variation in test performance among examinees from different regions or national jurisdictions is often partially attributed to differences in the degree of content correspondence between local school or training program curricula, and the test of interest. This posited relationship between test-curriculum correspondence, or "alignment,"…
Descriptors: Test Items, Test Construction, Alignment (Education), Curriculum
Holland, Paul W. – 1989
A simple technique, developed by A. Phillips (1987), is used to approximate the covariance between the Mantel-Haenszel log-odds-ratio estimator for a 2 x 2 x k table and the sample marginal proportions. These results are then applied to obtain an approximate variance estimate of an adjusted risk difference based on the Mantel-Haenszel odds-ratio…
Descriptors: Difficulty Level, Estimation (Mathematics), Item Bias, Risk
Thorndike, Robert L. – 1983
In educational testing, one is concerned to get as much information as possible about a given examinee from each minute of testing time. Maximum information is obtained when the difficulty of each test exercise matches the estimated ability level of the examinee. The goal of adaptive testing is to accomplish this. Adaptive patterns are reviewed…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Latent Trait Theory
Mathieu, Cindy K. – 1997
This paper presents six steps in test construction generally recommended by measurement textbook authors. The focus is primarily on paper-and-pencil achievement tests as used by classroom instructors, although the discussion touches on the construction of other types of assessment. The six steps are: (1) determine the test purpose; (2) determine the…
Descriptors: Achievement Tests, Difficulty Level, Measurement Techniques, Selection
Dodds, Jeffrey – 1999
Basic precepts for test development are described and explained as they are presented in measurement textbooks commonly used in the fields of education and psychology. The five building blocks discussed as the foundation of well-constructed tests are: (1) specification of purpose; (2) standard conditions; (3) consistency; (4) validity; and (5)…
Descriptors: Difficulty Level, Educational Research, Grading, Higher Education
Martinez, Michael E.; Katz, Irvin R. – 1992
Contrasts between constructed response items and stem-equivalent multiple-choice counterparts typically have involved averaging item characteristics, and this aggregation has masked differences in statistical properties at the item level. Moreover, even aggregated format differences have not been explained in terms of differential cognitive…
Descriptors: Architecture, Cognitive Processes, Construct Validity, Constructed Response
Frisbie, David A. – Educational Measurement: Issues and Practice, 1992
Literature related to the multiple true-false (MTF) item format is reviewed. Each answer cluster of an MTF item may have several true items, and the correctness of each is judged independently. MTF tests appear efficient and reliable, although they are a bit harder than multiple choice items for examinees. (SLD)
Descriptors: Achievement Tests, Difficulty Level, Literature Reviews, Multiple Choice Tests
Ziomek, Robert L.; Wright, Benjamin D. – 1984
Techniques such as the norm-referenced and average score techniques, commonly used in the identification of educationally disadvantaged students, are critiqued. This study applied latent trait theory, specifically the Rasch Model, along with teacher judgments relative to the mastery of instructional/test decisions, to derive a standard setting…
Descriptors: Cutting Scores, Difficulty Level, Educationally Disadvantaged, Intermediate Grades
Lunz, Mary E.; And Others – 1989
A method for understanding and controlling the multiple facets of an oral examination (OE) or other judge-intermediated examination is presented and illustrated. This study focused on determining the extent to which the facets model (FM) analysis constructs meaningful variables for each facet of an OE involving protocols, examiners, and…
Descriptors: Computer Software, Difficulty Level, Evaluators, Examiners
Wu, Margaret; Donovan, Jenny; Hutton, Penny; Lennon, Melissa – Ministerial Council on Education, Employment, Training and Youth Affairs (NJ1), 2008
In July 2001, the Ministerial Council on Education, Employment, Training and Youth Affairs (MCEETYA) agreed to the development of assessment instruments and key performance measures for reporting on student skills, knowledge and understandings in primary science. It directed the newly established Performance Measurement and Reporting Taskforce…
Descriptors: Foreign Countries, Scientific Literacy, Science Achievement, Comparative Analysis
