Publication Date
| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 200 |
| Since 2022 (last 5 years) | 1070 |
| Since 2017 (last 10 years) | 2580 |
| Since 2007 (last 20 years) | 4941 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Results |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Yang, Yanyun; Chen, Yi-Hsin; Lo, Wen-Juo; Turner, Jeannine E. – Journal of Psychoeducational Assessment, 2012
Previous studies have shown that method effects associated with item wording produce artifactual factors and threaten scale validity. This study examines item wording effects on a scale of attitudes toward learning mathematics for Taiwanese and U.S. samples. Analyses from a series of CFA (confirmatory factor analysis) models support the presence…
Descriptors: Attitude Measures, Cross Cultural Studies, Test Items, Grade 4
Jiao, Hong; Kamata, Akihito; Wang, Shudong; Jin, Ying – Journal of Educational Measurement, 2012
The applications of item response theory (IRT) models assume local item independence and that examinees are independent of each other. When a representative sample for psychometric analysis is selected using a cluster sampling method in a testlet-based assessment, both local item dependence and local person dependence are likely to be induced.…
Descriptors: Item Response Theory, Test Items, Markov Processes, Monte Carlo Methods
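The local item independence assumption that the Jiao et al. study examines can be illustrated with a minimal sketch (not the authors' model): under a two-parameter logistic (2PL) IRT model with local independence, the likelihood of a whole response pattern factors into a product of per-item probabilities. Item parameters and the ability value below are hypothetical.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT probability of answering an item correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def pattern_likelihood(theta, items, responses):
    """Under local independence, the likelihood of a response
    pattern is simply the product of per-item probabilities."""
    like = 1.0
    for (a, b), x in zip(items, responses):
        p = p_correct(theta, a, b)
        like *= p if x == 1 else (1.0 - p)
    return like

# Hypothetical (discrimination a, difficulty b) pairs for three items
items = [(1.0, -0.5), (1.2, 0.0), (0.8, 0.7)]
likelihood = pattern_likelihood(0.0, items, [1, 1, 0])
```

When items are clustered in testlets, this factorization no longer holds, which is precisely the dependence problem the study addresses.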
Soh, Kay Cheng – Higher Education Review, 2012
In the previous issue of "Higher Education Review," three university ranking systems in vogue were shown to be open to modification into more parsimonious forms using only about half of the predictors currently in use. This makes some of the predictors "redundant," as they contribute little to the overall ranking. It is…
Descriptors: Higher Education, Predictor Variables, Profiles, Test Items
Naeem, Naghma; van der Vleuten, Cees; Alfaris, Eiad Abdelmohsen – Advances in Health Sciences Education, 2012
The quality of items written for in-house examinations in medical schools remains a cause of concern. Several faculty development programs are aimed at improving faculty's item writing skills. The purpose of this study was to evaluate the effectiveness of a faculty development program in item development. An objective method was developed and used…
Descriptors: Evidence, Check Lists, Test Items, Medical Schools
Wong, Khoon Yoong; Oh, Kwang-Shin; Ng, Qiu Ting Yvonne; Cheong, Jim Siew Kuan – Teaching Mathematics and Its Applications: An International Journal of the IMA, 2012
The purposes of an online system to auto-mark students' responses to mathematics test items are to expedite the marking process, to enhance consistency in marking and to alleviate teacher assessment workload. We propose that a semi-automatic marking and customizable feedback system better serves pedagogical objectives than a fully automatic one.…
Descriptors: Feedback (Response), Test Items, Mathematics Tests, Online Systems
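The semi-automatic marking idea described by Wong et al. can be sketched generically (this is not their system): clear-cut numeric responses are auto-marked with customizable feedback, while ambiguous responses are deferred to the teacher. The function name and feedback strings below are illustrative assumptions.

```python
def semi_auto_mark(response, answer, tol=1e-6):
    """Semi-automatic marking sketch: auto-mark unambiguous numeric
    responses; defer anything else to the teacher for manual review.
    Returns a (status, feedback) pair."""
    try:
        value = float(response)
    except ValueError:
        # Non-numeric input: route to the human marker instead of guessing
        return ("review", "non-numeric response; mark manually")
    if abs(value - answer) <= tol:
        return ("correct", "well done")
    return ("incorrect", f"expected {answer}, got {value}")

status, feedback = semi_auto_mark("3.14", 3.14)
```

The "review" branch is the design point: routing uncertain cases to a human is what distinguishes semi-automatic from fully automatic marking.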
Carter, Merilyn – Australian Mathematics Teacher, 2012
Is the time allowed for National Assessment Program Literacy and Numeracy (NAPLAN) numeracy tests sufficient? Is there any evidence that students run out of time and, if so, what are the implications for teachers who prepare students for NAPLAN numeracy tests? In each of the 2010 Years 7 and 9 numeracy tests students were required to complete 32…
Descriptors: Evidence, Test Items, Numeracy, Time Factors (Learning)
Beretvas, S. Natasha; Cawthon, Stephanie W.; Lockhart, L. Leland; Kaye, Alyssa D. – Educational and Psychological Measurement, 2012
This pedagogical article is intended to explain the similarities and differences between the parameterizations of two multilevel measurement model (MMM) frameworks. The conventional two-level MMM that includes item indicators and models item scores (Level 1) clustered within examinees (Level 2) and the two-level cross-classified MMM (in which item…
Descriptors: Test Bias, Comparative Analysis, Test Items, Difficulty Level
Navas, Patricia; Verdugo, Miguel A.; Arias, Benito; Gomez, Laura E. – Research in Developmental Disabilities: A Multidisciplinary Journal, 2012
Although adaptive behavior became a diagnostic criterion in the 5th edition of the American Association on Intellectual and Developmental Disabilities (AAIDD) manual (Heber, 1959, 1961), there are no measures with adequate psychometric properties for diagnosing significant limitations in adaptive behavior according to the current conception of the…
Descriptors: Intelligence, Test Items, Mental Retardation, Developmental Disabilities
Van Dam, Nicholas T.; Hobkirk, Andrea L.; Danoff-Burg, Sharon; Earleywine, Mitch – Assessment, 2012
Mindfulness, a construct that entails moment-to-moment effort to be aware of present experiences and positive attitudinal features, has become integrated into the sciences. The Five Facet Mindfulness Questionnaire (FFMQ), one popular measure of mindfulness, exhibits different responses to positively and negatively worded items in nonmeditating…
Descriptors: Factor Structure, Measures (Individuals), Factor Analysis, Questionnaires
Kim, Sooyeon; Walker, Michael – Applied Measurement in Education, 2012
This study examined the appropriateness of the anchor composition in a mixed-format test, which includes both multiple-choice (MC) and constructed-response (CR) items, using subpopulation invariance indices. Linking functions were derived in the nonequivalent groups with anchor test (NEAT) design using two types of anchor sets: (a) MC only and (b)…
Descriptors: Multiple Choice Tests, Test Format, Test Items, Equated Scores
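As a rough illustration of the anchor-based linking that the Kim and Walker study evaluates (a generic mean-sigma linear linking, not the authors' actual procedure), anchor-item score statistics from the two nonequivalent groups determine a slope and intercept that place form-X scores on the form-Y scale. The anchor score lists below are hypothetical.

```python
from statistics import mean, stdev

def mean_sigma_link(anchor_x, anchor_y):
    """Mean-sigma linear linking: returns (slope, intercept) such
    that y = slope * x + intercept maps form-X scores onto the
    form-Y scale, based on anchor scores observed in each group."""
    slope = stdev(anchor_y) / stdev(anchor_x)
    intercept = mean(anchor_y) - slope * mean(anchor_x)
    return slope, intercept

# Hypothetical anchor scores for the two examinee groups
anchor_x = [10, 12, 14, 16, 18]
anchor_y = [11, 14, 17, 20, 23]
slope, intercept = mean_sigma_link(anchor_x, anchor_y)
```

The study's question is whether such a linking function stays invariant when the anchor contains MC items only versus both MC and CR items.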
Chae, Soo Eun; Kim, Doyoung; Han, Jae-Ho – IEEE Transactions on Education, 2012
Those items or test characteristics that are likely to result in differential item functioning (DIF) across accommodated test forms in statewide tests have received little attention. An examination of elementary-level student performance across accommodated test forms in a large-scale mathematics assessment revealed DIF variations by grades,…
Descriptors: Test Bias, Mathematics Tests, Testing Accommodations, Elementary School Mathematics
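One common way to screen items for DIF of the kind the Chae, Kim, and Han study reports is the Mantel-Haenszel common odds ratio, sketched here with hypothetical counts stratified by ability level (this is a standard DIF statistic, not necessarily the authors' method).

```python
def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across score strata.
    Each stratum is (ref_right, ref_wrong, focal_right, focal_wrong).
    A value near 1.0 suggests the item shows little or no DIF."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical 2x2 counts at three ability strata for one item
strata = [(30, 10, 25, 15), (40, 20, 35, 25), (20, 30, 15, 35)]
alpha = mh_odds_ratio(strata)
```

A ratio well above or below 1.0 would flag the item for review of whether an accommodation (or test form) advantages one group at equal ability.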
Crotts, Katrina; Sireci, Stephen G.; Zenisky, April – Journal of Applied Testing Technology, 2012
Validity evidence based on test content is important for educational tests to demonstrate the degree to which they fulfill their purposes. Most content validity studies involve subject matter experts (SMEs) who rate items that comprise a test form. In computerized-adaptive testing, examinees take different sets of items and test "forms"…
Descriptors: Computer Assisted Testing, Adaptive Testing, Content Validity, Test Content
McAllister, Daniel; Guidice, Rebecca M. – Teaching in Higher Education, 2012
The primary goal of teaching is to successfully facilitate learning. Testing can help accomplish this goal in two ways. First, testing can provide a powerful motivation for students to prepare when they perceive that the effort involved leads to valued outcomes. Second, testing can provide instructors with valuable feedback on whether their…
Descriptors: Testing, Role, Student Motivation, Feedback (Response)
Breakstone, Joel – Theory and Research in Social Education, 2014
This article considers the design process for new formative history assessments. Over the course of 3 years, my colleagues from the Stanford History Education Group and I designed, piloted, and revised dozens of "History Assessments of Thinking" (HATs). As we created HATs, we sought to gather information about their cognitive validity,…
Descriptors: History Instruction, Formative Evaluation, Tests, Correlation
Salcedo, Audy – Statistics Education Research Journal, 2014
This study presents the results of the analysis of a group of teacher-made test questions for statistics courses at the university level. Teachers were asked to submit tests they had used in their previous two semesters. Ninety-seven tests containing 978 questions were gathered and classified according to the SOLO taxonomy (Biggs & Collis,…
Descriptors: Statistics, Mathematics Tests, Test Items, Test Content