Asiret, Semih; Sünbül, Seçil Ömür – International Journal of Psychology and Educational Studies, 2023
This study examined the effect of missing data of different patterns and sizes on test equating methods under the NEAT design. For this purpose, factors such as sample size, average difficulty difference between the test forms, difference between the ability distributions,…
Descriptors: Research Problems, Data, Test Items, Equated Scores
The Effect of Item Pools of Different Strengths on the Test Results of Computerized-Adaptive Testing
Kezer, Fatih – International Journal of Assessment Tools in Education, 2021
Item response theory provides important advantages for exams administered, or to be administered, digitally. For computerized adaptive tests to yield valid and reliable estimates under IRT, high-quality item pools are needed. This study examines how adaptive test applications vary across item pools consisting of items…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Item Response Theory
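Kezer's entry concerns how item-pool quality affects computerized adaptive testing. For orientation, here is a minimal sketch of the standard maximum-information item-selection rule for a CAT under the 2PL model; the pool below is hypothetical and the function names are illustrative, not taken from the article.

    import numpy as np

    def p_2pl(theta, a, b):
        # 2PL probability of a correct response at ability theta
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        # Fisher information of a 2PL item: a^2 * p * (1 - p)
        p = p_2pl(theta, a, b)
        return a**2 * p * (1.0 - p)

    def next_item(theta_hat, a, b, administered):
        # Pick the unused item that is most informative at the current estimate
        info = item_information(theta_hat, a, b)
        info[administered] = -np.inf
        return int(np.argmax(info))

    # Hypothetical pool of 50 items.
    rng = np.random.default_rng(0)
    a = rng.uniform(0.5, 2.0, 50)
    b = rng.normal(0.0, 1.0, 50)
    administered = np.zeros(50, dtype=bool)
    print(next_item(0.0, a, b, administered))  # first item at theta = 0

A stronger pool (higher discriminations, difficulties spread across the ability range) supplies more information at every provisional estimate, which is the mechanism the study varies.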
Akin-Arikan, Çigdem; Gelbal, Selahattin – Eurasian Journal of Educational Research, 2021
Purpose: This study aims to compare the performance of Item Response Theory (IRT) equating and kernel equating (KE) methods in terms of equating error (RMSD) and the standard error of equating (SEE) under the anchor-item nonequivalent groups design. Method: To this end, a set of conditions, including ability distribution, type of anchor items…
Descriptors: Equated Scores, Item Response Theory, Test Items, Statistical Analysis
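Akin-Arikan and Gelbal compare IRT equating with kernel equating under an anchor-item (NEAT) design. As a point of reference for how IRT linking works, below is a minimal mean-sigma linking sketch plus an RMSD helper; mean-sigma is one common linking method, not necessarily the one used in the article, and all numbers are hypothetical.

    import numpy as np

    def mean_sigma(b_anchor_new, b_anchor_base):
        # Linking constants mapping new-form difficulties onto the base scale
        A = np.std(b_anchor_base, ddof=1) / np.std(b_anchor_new, ddof=1)
        B = np.mean(b_anchor_base) - A * np.mean(b_anchor_new)
        return A, B

    def rmsd(equated, criterion):
        # Root mean squared difference between equated and criterion scores
        equated, criterion = np.asarray(equated), np.asarray(criterion)
        return np.sqrt(np.mean((equated - criterion)**2))

    # Hypothetical anchor-item difficulties estimated separately on each form.
    b_new = np.array([-1.2, -0.4, 0.1, 0.9, 1.5])
    b_base = np.array([-1.0, -0.3, 0.2, 1.1, 1.6])
    A, B = mean_sigma(b_new, b_base)
    print(A, B)  # then rescale the new form: b* = A * b + B, a* = a / A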
Bjermo, Jonas; Miller, Frank – Applied Measurement in Education, 2021
Interest in measuring growth in student ability in various subjects across school grades has increased in recent years, so good precision in the estimated growth is important. This paper compares estimation methods and test designs with respect to the precision and bias of the estimated growth of mean ability…
Descriptors: Scaling, Ability, Computation, Test Items
Kim, Hun Ju; Lee, Sung Ja; Kam, Kyung-Yoon – International Journal of Disability, Development and Education, 2023
This study verified the validity and reliability of the School Function Assessment (SFA) using Rasch analysis at South Korean school-based occupational therapy sites serving children with intellectual and other disabilities. Participants were 103 elementary school children (grades 1 through 6) with disabilities. Rasch analysis revealed several…
Descriptors: Foreign Countries, Test Validity, Test Reliability, Occupational Therapy
Sunbul, Onder; Yormaz, Seha – Eurasian Journal of Educational Research, 2018
Purpose: Several studies in the literature investigate the performance of ω under various conditions. However, no study could be found on the effects of item difficulty, item discrimination, and ability restrictions on the performance of ω. The current study aims to investigate the performance of ω under the conditions given below.…
Descriptors: Test Items, Difficulty Level, Ability, Cheating
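Sunbul and Yormaz evaluate the ω answer-copying index. A simplified sketch of an ω-style statistic follows: it standardizes the observed number of answer matches against the matches expected if the suspected copier answered independently. It assumes the per-item probabilities that the copier would choose the source's option are already available from an IRT model (Wollack's ω derives them from the nominal response model); the data here are hypothetical.

    import numpy as np

    def omega_index(copier_resp, source_resp, p_match):
        # h = observed matches; mean and variance come from model probabilities
        h = np.sum(np.asarray(copier_resp) == np.asarray(source_resp))
        e = np.sum(p_match)
        v = np.sum(p_match * (1.0 - p_match))
        return (h - e) / np.sqrt(v)

    # Hypothetical option choices and model-based match probabilities.
    copier = [1, 2, 0, 3, 2, 1]
    source = [1, 2, 0, 3, 0, 1]
    p = np.array([0.40, 0.30, 0.50, 0.20, 0.25, 0.35])
    print(omega_index(copier, source, p))  # compare to a normal critical value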
Reddick, Rachel – International Educational Data Mining Society, 2019
One significant challenge in the field of measuring ability is measuring the current ability of a learner while they are learning. Many forms of inference become computationally complex in the presence of time-dependent learner ability, and are not feasible to implement in an online context. In this paper, we demonstrate an approach which can…
Descriptors: Measurement Techniques, Mathematics, Assignments, Learning
Sun, Sumin; Schweizer, Karl; Ren, Xuezhu – Journal of Cognition and Development, 2019
This study examined whether there is a developmental difference in the emergence of an item-position effect in intelligence testing. The item-position effect describes the dependency of an item's characteristics on its position within the test and is explained by learning. Data on fluid intelligence measured by Raven's Standard Progressive Matrices…
Descriptors: Intelligence Tests, Test Items, Difficulty Level, Short Term Memory
DeMars, Christine E.; Jurich, Daniel P. – Educational and Psychological Measurement, 2015
In educational testing, differential item functioning (DIF) statistics must be accurately estimated to ensure the appropriate items are flagged for inspection or removal. This study showed how using the Rasch model to estimate DIF may introduce considerable bias in the results when there are large group differences in ability (impact) and the data…
Descriptors: Test Bias, Guessing (Tests), Ability, Differences
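DeMars and Jurich's result concerns DIF estimation when groups differ substantially in ability and the data involve guessing. As a compact illustration of score-matched DIF flagging, here is a minimal Mantel-Haenszel sketch; MH is a common alternative to the Rasch-based procedure the article examines, not the authors' method, and the simulated data are hypothetical.

    import numpy as np

    def mantel_haenszel_alpha(item, total, group):
        # Common odds ratio across matching-score strata; ~1 means no DIF.
        # item: 0/1 responses; total: matching score (e.g., rest score);
        # group: 0 = reference, 1 = focal.
        num = den = 0.0
        for k in np.unique(total):
            m = total == k
            r_ref = np.sum((item == 1) & (group == 0) & m)
            w_ref = np.sum((item == 0) & (group == 0) & m)
            r_foc = np.sum((item == 1) & (group == 1) & m)
            w_foc = np.sum((item == 0) & (group == 1) & m)
            n = m.sum()
            if n:
                num += r_ref * w_foc / n
                den += r_foc * w_ref / n
        return num / den

    # Hypothetical data: 6 Rasch items, 0.5 SD group impact, no true DIF.
    rng = np.random.default_rng(1)
    group = rng.integers(0, 2, 2000)
    theta = rng.normal(0.0, 1.0, 2000) - 0.5 * group
    b = np.array([0.0, -1.0, -0.5, 0.0, 0.5, 1.0])
    resp = (rng.random((2000, 6)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)
    rest = resp[:, 1:].sum(axis=1)
    print(mantel_haenszel_alpha(resp[:, 0], rest, group))  # should be near 1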
Çokluk, Ömay; Gül, Emrah; Dogan-Gül, Çilem – Educational Sciences: Theory and Practice, 2016
The study examines whether differential item functioning appears across three test forms with random and sequential item orderings (easy-to-hard and hard-to-easy), based on Classical Test Theory (CTT) and Item Response Theory (IRT) methods and bearing item difficulty levels in mind. In this correlational research, the…
Descriptors: Test Bias, Test Items, Difficulty Level, Test Theory
Goldhammer, Frank – Measurement: Interdisciplinary Research and Perspectives, 2015
The main challenge of ability tests relates to the difficulty of items, whereas speed tests demand that test takers complete very easy items quickly. This article proposes a conceptual framework to represent how performance depends on both between-person differences in speed and ability and the speed-ability compromise within persons. Related…
Descriptors: Ability, Aptitude Tests, Reaction Time, Test Items
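Goldhammer's framework treats performance as a within-person compromise between speed and ability. A toy sketch of one way to express that compromise, assuming a logistic link with log time on task as a predictor (a common modeling choice, not necessarily the article's exact formulation):

    import numpy as np

    def p_correct(theta, b, seconds, tau):
        # Success probability rises with ability and with time invested.
        return 1.0 / (1.0 + np.exp(-(theta - b + tau * np.log(seconds))))

    # For fixed ability, answering faster lowers the success probability.
    for t in (2.0, 4.0, 8.0):
        print(t, p_correct(theta=0.0, b=0.5, seconds=t, tau=0.6))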
Schoen, Robert C.; Yang, Xiaotong; Liu, Sicong; Paek, Insu – Grantee Submission, 2017
The Early Fractions Test v2.2 is a paper-pencil test designed to measure mathematics achievement of third- and fourth-grade students in the domain of fractions. The purpose, or intended use, of the Early Fractions Test v2.2 is to serve as a measure of student outcomes in a randomized trial designed to estimate the effect of an educational…
Descriptors: Psychometrics, Mathematics Tests, Mathematics Achievement, Fractions
Shulruf, Boaz; Jones, Phil; Turner, Rolf – Higher Education Studies, 2015
The determination of Pass/Fail decisions over borderline grades (i.e., grades that do not clearly distinguish between competent and incompetent examinees) has been an ongoing challenge for academic institutions. This study utilises the Objective Borderline Method (OBM) to determine examinee ability and item difficulty, and from that…
Descriptors: Undergraduate Students, Pass Fail Grading, Decision Making, Probability
Matlock, Ki Lynn – ProQuest LLC, 2013
When test forms with equal total difficulty and equal numbers of items vary in difficulty and length within sub-content areas, an examinee's estimated score may vary across equivalent forms, depending on how well his or her true ability in each sub-content area aligns with the difficulty and number of items within those areas.…
Descriptors: Test Items, Difficulty Level, Ability, Test Content
Store, Davie – ProQuest LLC, 2013
Although some research has examined certain types of context effects under the nonequivalent anchor test (NEAT) design, their impact on actual scores remains less well understood. In addition, the impact of item context effects on scores has not been investigated extensively when item…
Descriptors: Test Items, Equated Scores, Accuracy, Item Response Theory