Publication Date
| Publication date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 220 |
| Since 2022 (last 5 years) | 1089 |
| Since 2017 (last 10 years) | 2599 |
| Since 2007 (last 20 years) | 4960 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Records |
| --- | --- |
| Turkey | 226 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 66 |
What Works Clearinghouse Rating
| WWC rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Moreira, Paulo A. S.; Machado Vaz, Filipa; Dias, Paulo C.; Petracchi, Paulo – Canadian Journal of School Psychology, 2009
Student engagement is an emergent research domain in educational psychology, as research increasingly supports the connection between academic achievement, school-related behaviours, and student engagement. In spite of the important role of student engagement in academic achievement across cultures, little is known about the cross-cultural…
Descriptors: Educational Psychology, Factor Structure, Psychometrics, Foreign Countries
Ariel, Robert; Dunlosky, John; Bailey, Heather – Journal of Experimental Psychology: General, 2009
Theories of self-regulated study assume that learners monitor item difficulty when making decisions about which items to select for study. To complement such theories, the authors propose an agenda-based regulation (ABR) model in which learners' study decisions are guided by an agenda that learners develop to prioritize items for study, given…
Descriptors: Test Items, Time Management, Item Analysis, Rewards
Pyle, Katie; Jones, Emily; Williams, Chris; Morrison, Jo – Educational Research, 2009
Background: All national curriculum tests in England are pre-tested as part of the development process. Differences in pupil performance between pre-test and live test are consistently found. This difference has been termed the pre-test effect. Understanding the pre-test effect is essential in the test development and selection processes and in…
Descriptors: Foreign Countries, Pretesting, Context Effect, National Curriculum
Ding, Kele; Olds, R. Scott; Thombs, Dennis L. – Journal of Drug Education, 2009
This retrospective case study assessed the influence of item non-response error on subsequent response to questionnaire items assessing adolescent alcohol and marijuana use. Post-hoc analyses were conducted on survey results obtained from 4,371 7th to 12th grade students in Ohio in 2005. A skip pattern design in a conventional questionnaire…
Descriptors: Substance Abuse, Marijuana, Measures (Individuals), Secondary School Students
von Davier, Alina A.; Fournier-Zajac, Stephanie; Holland, Paul W. – ETS Research Report Series, 2007
In the nonequivalent groups with anchor test (NEAT) design, there are several ways to use the information provided by the anchor in the equating process. One of the NEAT-design equating methods is the linear observed-score Levine method (Kolen & Brennan, 2004). It is based on a classical test theory model of the true scores on the test forms…
Descriptors: Equated Scores, Statistical Analysis, Test Items, Test Theory
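
The linear observed-score Levine method mentioned above combines synthetic-population moments built from the anchor test. A minimal sketch, assuming the internal-anchor form of the gamma terms from Kolen & Brennan (2004); the moment values passed in at the bottom are purely illustrative, not from the report:

```python
# Hedged sketch: Levine linear observed-score equating under the NEAT design.
def levine_linear_equate(x, m1X, s1X, m1V, s1V, covXV,   # group 1: form X + anchor V
                            m2Y, s2Y, m2V, s2V, covYV,   # group 2: form Y + anchor V
                            w1=0.5):
    w2 = 1.0 - w1
    # Levine observed-score gammas (internal anchor)
    g1 = s1X**2 / covXV
    g2 = s2Y**2 / covYV
    dV = m1V - m2V
    # Synthetic-population means and variances for X and Y
    msX = m1X - w2 * g1 * dV
    msY = m2Y + w1 * g2 * dV
    vsX = s1X**2 - w2 * g1**2 * (s1V**2 - s2V**2) + w1 * w2 * g1**2 * dV**2
    vsY = s2Y**2 + w1 * g2**2 * (s1V**2 - s2V**2) + w1 * w2 * g2**2 * dV**2
    # Linear equating function l_Y(x)
    return (vsY**0.5 / vsX**0.5) * (x - msX) + msY

print(levine_linear_equate(30, 28.0, 6.0, 14.0, 3.0, 15.0,
                               27.0, 6.5, 13.5, 3.2, 16.0))
```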
van der Linden, Wim J. – Psychometrika, 2007
Current modeling of response times on test items has been strongly influenced by the paradigm of experimental reaction-time research in psychology. For instance, some of the models have a parameter structure that was chosen to represent a speed-accuracy tradeoff, while others equate speed directly with response time. Also, several response-time…
Descriptors: Test Items, Reaction Time, Markov Processes, Item Response Theory
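
One response-time model commonly associated with this line of work is the lognormal model, in which the log of response time is normal with mean beta_j - tau_i (item time intensity minus person speed) and precision alpha_j squared. A minimal sketch of its density; the parameter values below are illustrative only:

```python
import math

def lognormal_rt_density(t, tau, alpha, beta):
    """Lognormal response-time density: ln(t) ~ Normal(beta - tau, 1/alpha**2).
    tau = person speed, beta = item time intensity, alpha = item discrimination."""
    z = alpha * (math.log(t) - (beta - tau))
    return alpha / (t * math.sqrt(2 * math.pi)) * math.exp(-0.5 * z * z)

# Illustrative call: a 45-second response on a moderately time-intensive item.
print(lognormal_rt_density(t=45.0, tau=0.2, alpha=1.5, beta=4.0))
```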
Penfield, Randall D. – Applied Measurement in Education, 2007
A widely used approach for categorizing the level of differential item functioning (DIF) in dichotomous items is the scheme proposed by Educational Testing Service (ETS) based on a transformation of the Mantel-Haenszel common odds ratio. In this article two classification schemes for DIF in polytomous items (referred to as the P1 and P2 schemes)…
Descriptors: Simulation, Educational Testing, Test Bias, Evaluation Methods
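
For dichotomous items, the ETS scheme referred to above rests on the MH D-DIF statistic, a rescaled log of the Mantel-Haenszel common odds ratio. A hedged sketch of the A/B/C rules; the significance cutoffs and the se_delta input are assumptions for illustration, and the article's P1 and P2 schemes extend this idea to polytomous items:

```python
import math

def ets_dif_category(alpha_mh, se_delta):
    """Sketch of the ETS A/B/C classification for a dichotomous item.
    alpha_mh: Mantel-Haenszel common odds ratio; se_delta: assumed standard
    error of the delta statistic from the MH analysis."""
    delta = -2.35 * math.log(alpha_mh)      # MH D-DIF on the ETS delta scale
    z_zero = abs(delta) / se_delta          # test of delta = 0
    z_one = (abs(delta) - 1.0) / se_delta   # test of |delta| = 1
    if abs(delta) < 1.0 or z_zero < 1.96:
        return "A (negligible DIF)"
    if abs(delta) >= 1.5 and z_one > 1.645:
        return "C (large DIF)"
    return "B (moderate DIF)"

print(ets_dif_category(alpha_mh=0.55, se_delta=0.30))
```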
Braeken, Johan; Tuerlinckx, Francis; De Boeck, Paul – Psychometrika, 2007
Most item response theory models are not robust to violations of conditional independence. However, several modeling approaches (e.g., conditioning on other responses, additional random effects) exist that try to incorporate local item dependencies, but they have some drawbacks such as the nonreproducibility of marginal probabilities and resulting…
Descriptors: Probability, Item Response Theory, Test Items, Psychometrics
Grisay, Aletta; Monseur, Christian – Studies in Educational Evaluation, 2007
In this article, data from the Reading Literacy study conducted in 2000 and 2001 as part of the OECD "Programme for International Student Assessment" (PISA) were analysed in order to explore equivalence issues across the 47 test adaptations in various languages of instruction that were used by the participating countries. On average,…
Descriptors: Cross Cultural Studies, Test Items, Literacy, Language of Instruction
Kim, Seock-Ho; Cohen, Allan S.; DiStefano, Christine A.; Kim, Sooyeon – 1998
Type I error rates of the likelihood ratio test for the detection of differential item functioning (DIF) in the partial credit model were investigated using simulated data. The partial credit model with four ordered performance levels was used to generate data sets of a 30-item test for samples of 300 and 1,000 simulated examinees. Three different…
Descriptors: Item Bias, Simulation, Test Items
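
The likelihood-ratio DIF test evaluated in this study compares a compact model (item parameters constrained equal across groups) with an augmented model in which the studied item's parameters may differ. A minimal sketch with placeholder log-likelihoods, not values from the study; the degrees of freedom equal the number of freed parameters (e.g., three step parameters for an item with four ordered levels):

```python
from scipy.stats import chi2

def lr_dif_test(loglik_compact, loglik_augmented, df):
    """Likelihood-ratio (G2) test of DIF for one studied item."""
    g2 = 2.0 * (loglik_augmented - loglik_compact)
    return g2, chi2.sf(g2, df)

g2, p = lr_dif_test(loglik_compact=-4521.3, loglik_augmented=-4516.8, df=3)
print(f"G2 = {g2:.2f}, p = {p:.4f}")
```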
Raju, Nambury S.; Arenson, Ethan – 2002
An alternative method of finding a common metric for separate calibrations through the use of a common (anchor) set of items is presented. Based on Raju's (1988) method of calculating the area between the two item response functions, this (area-minimization) method minimizes the sum of the squared exact unsigned areas of each of the common items.…
Descriptors: Item Response Theory, Test Items
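
A rough numerical sketch of the idea: transform one calibration's anchor-item parameters with linking constants (A, B) and choose the constants that minimize the sum of squared unsigned areas between the paired item characteristic curves. The quadrature approximation and parameter values below are illustrative; Raju (1988) gives closed-form area expressions that an exact implementation would use instead:

```python
import numpy as np
from scipy.optimize import minimize

def icc(theta, a, b, D=1.7):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-D * a * (theta - b)))

def sum_sq_areas(AB, items1, items2):
    A, B = AB
    theta = np.linspace(-6, 6, 601)
    dt = theta[1] - theta[0]
    total = 0.0
    for (a1, b1), (a2, b2) in zip(items1, items2):
        a2s, b2s = a2 / A, A * b2 + B          # put calibration 2 on scale 1
        area = np.sum(np.abs(icc(theta, a1, b1) - icc(theta, a2s, b2s))) * dt
        total += area ** 2
    return total

items1 = [(1.2, -0.5), (0.8, 0.3), (1.5, 1.0)]     # anchor items, calibration 1
items2 = [(1.0, -0.9), (0.7, -0.1), (1.3, 0.6)]    # same items, calibration 2
res = minimize(sum_sq_areas, x0=[1.0, 0.0], args=(items1, items2))
print("A, B =", res.x)
```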
Deng, Hui; Ansley, Timothy N. – 2000
This study provided preliminary results about the performance of the DIMTEST statistical procedure for detecting multidimensionality with data simulated from both compensatory and noncompensatory models under a latent structure where all items in a test were influenced by the same two abilities. For the first case, data were simulated to reflect…
Descriptors: Simulation, Test Construction, Test Items
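
A brief sketch of the two data-generating structures the abstract contrasts, with illustrative parameters: a compensatory model, in which strength on one ability can offset weakness on the other, and a noncompensatory model, in which the component probabilities multiply so that both abilities are needed:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_compensatory(th1, th2, a1, a2, d):
    return 1.0 / (1.0 + np.exp(-(a1 * th1 + a2 * th2 + d)))

def p_noncompensatory(th1, th2, a1, b1, a2, b2):
    p1 = 1.0 / (1.0 + np.exp(-a1 * (th1 - b1)))
    p2 = 1.0 / (1.0 + np.exp(-a2 * (th2 - b2)))
    return p1 * p2                      # both components must be "passed"

theta = rng.standard_normal((1000, 2))  # every item influenced by both abilities
p = p_compensatory(theta[:, 0], theta[:, 1], a1=1.2, a2=0.8, d=0.0)
responses = (rng.random(1000) < p).astype(int)
print(responses[:20])
```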
Christensen, Karl Bang; Bjorner, Jakob Bue; Kreiner, Svend; Petersen, Jorgen Holm – Psychometrika, 2002
Considers tests of unidimensionality in polytomous Rasch models against a specified alternative, given by a partition of the items into subgroups, that are believed to measure different dimensions of the latent construct. Uses data from an occupational health study to motivate and illustrate the methods. (SLD)
Descriptors: Item Response Theory, Test Items
Little, Todd D.; Cunningham, William A.; Shahar, Golan; Widaman, Keith F. – Structural Equation Modeling, 2002
Studied the evidence for the practice of using parcels of items as manifest variables in structural equation modeling procedures. Findings suggest that the unconsidered use of parcels is never warranted, but the considered use of parcels cannot be dismissed out of hand. Describes a number of parceling techniques and their strengths and weaknesses.…
Descriptors: Structural Equation Models, Test Items
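
A minimal sketch of the parceling practice under discussion, with an arbitrary item-to-parcel assignment: small sets of items are averaged into parcels, and the parcel scores rather than the items serve as the manifest indicators of the latent factor:

```python
import numpy as np

rng = np.random.default_rng(1)
items = rng.integers(1, 6, size=(200, 9))        # 200 respondents x 9 Likert items

# Three 3-item parcels; the assignment here is purely illustrative.
parcel_assignment = [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
parcels = np.column_stack([items[:, idx].mean(axis=1) for idx in parcel_assignment])
print(parcels.shape)   # (200, 3): three manifest indicators for the SEM
```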
Ip, Edward Hak-Sing – Psychometrika, 2000
Provides a general method that adjusts for the inflation of information associated with a test containing item clusters and a computational scheme for the evaluation of the factors of adjustment for clusters in the restrictive and general cases. Illustrates the approach with an analysis of National Assessment of Educational Progress data. (SLD)
Descriptors: Cluster Analysis, Correlation, Test Items
