| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
| Location | Records |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Ting, Mu Yu – EURASIA Journal of Mathematics, Science & Technology Education, 2017
Using the capabilities of expert knowledge structures, the researcher prepared test questions on the university calculus topic of "finding the area by integration." The quiz is divided into two types of multiple choice items (one out of four and one out of many). After the calculus course was taught and tested, the results revealed that…
Descriptors: Calculus, Mathematics Instruction, College Mathematics, Multiple Choice Tests
Martinková, Patricia; Drabinová, Adéla; Liaw, Yuan-Ling; Sanders, Elizabeth A.; McFarland, Jenny L.; Price, Rebecca M. – CBE - Life Sciences Education, 2017
We provide a tutorial on differential item functioning (DIF) analysis, an analytic method useful for identifying potentially biased items in assessments. After explaining a number of methodological approaches, we test for gender bias in two scenarios that demonstrate why DIF analysis is crucial for developing assessments, particularly because…
Descriptors: Test Bias, Test Items, Gender Bias, Science Tests
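For readers unfamiliar with differential item functioning (DIF) analysis, a minimal sketch of one common approach, a Mantel-Haenszel check for a single dichotomous item with examinees matched on total score, might look like the following. This is an illustrative example only, not the authors' tutorial code, and the data, group labels, and matching variable are hypothetical.

```python
# Minimal Mantel-Haenszel DIF sketch for one dichotomous item (illustrative only).
import numpy as np

def mantel_haenszel_dif(item, group, total):
    """Return the MH common odds-ratio estimate for one item.

    item  : 0/1 responses to the studied item
    group : 0 = reference group, 1 = focal group
    total : matching variable (e.g., total test score)
    """
    num, den = 0.0, 0.0
    for score in np.unique(total):
        mask = total == score
        ref, foc = group[mask] == 0, group[mask] == 1
        a = np.sum(item[mask][ref] == 1)   # reference group, correct
        b = np.sum(item[mask][ref] == 0)   # reference group, incorrect
        c = np.sum(item[mask][foc] == 1)   # focal group, correct
        d = np.sum(item[mask][foc] == 0)   # focal group, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n                   # MH odds ratio: sum(a*d/n) / sum(b*c/n)
        den += b * c / n
    return num / den if den > 0 else np.nan

# Hypothetical data: item difficulty depends on total score but not on group,
# so the estimated odds ratio should be near 1 (no DIF).
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=500)
total = rng.integers(0, 11, size=500)
item = (rng.random(500) < 0.3 + 0.05 * total).astype(int)
print(mantel_haenszel_dif(item, group, total))
```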
Hartley, James – Psychology Teaching Review, 2017
In this article, Hartley notes the difficulties of using questionnaires to assess the efficiency of new instructional methods and highlights nine issues that researchers must consider. Hartley continues the discussion about the use of questionnaires and suggests that psychology teachers can help improve the teaching of psychology by drawing…
Descriptors: Questionnaires, Instructional Innovation, Instructional Effectiveness, Teaching Methods
Li, Feifei – ETS Research Report Series, 2017
An information-correction method for testlet-based tests is introduced. This method takes advantage of both generalizability theory (GT) and item response theory (IRT). The measurement error for the examinee proficiency parameter is often underestimated when a unidimensional conditional-independence IRT model is specified for a testlet dataset. By…
Descriptors: Item Response Theory, Generalizability Theory, Tests, Error of Measurement
Chen, Ping – Journal of Educational and Behavioral Statistics, 2017
Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…
Descriptors: Test Items, Item Response Theory, Test Construction, Adaptive Testing
Taras, Maddalena; Davies, Mark S. – London Review of Education, 2017
This research examines assessment literacy, that is, understandings of assessment terminology and how the terms relate to each other, among academic staff developers in the UK, with data collected via questionnaires and semi-structured interviews. Academic staff developers have been trained and certified to support new higher education lecturers in…
Descriptors: Higher Education, Beliefs, Vocabulary, Semi Structured Interviews
Geurten, Marie; Lloyd, Marianne; Willems, Sylvie – Child Development, 2017
Previous research has suggested that fluency does not influence memory decisions until ages 7-8. In two experiments (n = 96 and n = 64, respectively), children, aged 4, 6, and 8 years (Experiments 1 and 2), and adults (Experiment 2) studied a list of pictures. Participants completed a recognition test during which each study item was preceded by a…
Descriptors: Language Fluency, Young Children, Children, Memory
Walstad, William B.; Rebeck, Ken – Journal of Economic Education, 2017
The "Test of Financial Literacy" (TFL) was created to measure the financial knowledge of high school students. Its content is based on the standards and benchmarks stated in the "National Standards for Financial Literacy" (Council for Economic Education 2013). The test development process involved extensive item writing and…
Descriptors: Tests, Money Management, Literacy, High School Students
Kunnan, Antony John; Carr, Nathan – Language Testing in Asia, 2017
Background: This study examined the comparability of reading and writing tasks of two English language proficiency tests--the General English Proficiency Test-A (GEPT-A) developed by Language Training Center, Taipei and the Internet-Based Test of English as a Foreign Language (iBT) developed by Educational Testing Service, Princeton. Methods: Data…
Descriptors: Language Tests, English (Second Language), Language Proficiency, Foreign Countries
Su, Shiyang – ProQuest LLC, 2017
With online assessment becoming mainstream and the recording of response times becoming straightforward, the importance of response times as a measure of psychological constructs has been recognized, and the literature on modeling response times has grown over the last few decades. Previous studies have tried to formulate models and theories to…
Descriptors: Reading Comprehension, Item Response Theory, Models, Reaction Time
He, Wei; Diao, Qi; Hauser, Carl – Educational and Psychological Measurement, 2014
This study compared four item-selection procedures developed for use with severely constrained computerized adaptive tests (CATs). Severely constrained CATs are adaptive tests that seek to meet a complex set of constraints that are often not mutually exclusive (i.e., an item may contribute to the satisfaction of several…
Descriptors: Comparative Analysis, Test Items, Selection, Computer Assisted Testing
Solano-Flores, Guillermo – Applied Measurement in Education, 2014
This article addresses validity and fairness in the testing of English language learners (ELLs)--students in the United States who are developing English as a second language. It discusses limitations of current approaches to examining the linguistic features of items and their effect on the performance of ELL students. The article submits that…
Descriptors: English Language Learners, Test Items, Probability, Test Bias
Shin, Sanggyu; Hashimoto, Hiroshi – International Association for Development of the Information Society, 2014
We describe a system that automatically assembles test questions from a set of examples. Our system can create test questions appropriate for each user's level at low cost. In particular, when users review their lessons, our system provides new test questions assembled on the basis of their previous test results and past mistakes, rather than a…
Descriptors: Test Items, Test Construction, Databases, Electronic Learning
Aksakalli, Ayhan; Turgut, Umit; Salar, Riza – Journal of Education and Practice, 2016
The purpose of this study is to investigate whether students are more successful on abstract or illustrated test questions. To this end, the questions on an abstract test were changed into a visual format, and these tests were administered every three days to a total of 240 students at six middle schools located in the Erzurum city center and…
Descriptors: Comparative Analysis, Scores, Middle School Students, Grade 8
Guo, Hongwen; Rios, Joseph A.; Haberman, Shelby; Liu, Ou Lydia; Wang, Jing; Paek, Insu – Applied Measurement in Education, 2016
Unmotivated test takers who resort to rapid guessing on item responses can negatively affect validity studies and evaluations of teacher and institution performance, making it critical to identify these test takers. The authors propose a new nonparametric method for finding response-time thresholds for flagging item responses that result from rapid-guessing…
Descriptors: Guessing (Tests), Reaction Time, Nonparametric Statistics, Models
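As a rough illustration of the general idea behind response-time flagging (not the authors' nonparametric thresholding method), a minimal sketch that flags responses faster than an assumed fixed cutoff could look like this. The 5-second cutoff and the response times are hypothetical.

```python
# Illustrative rapid-guessing flag based on a fixed response-time threshold.
import numpy as np

def flag_rapid_guesses(response_times, threshold=5.0):
    """Return a boolean array marking responses faster than the threshold (seconds)."""
    response_times = np.asarray(response_times, dtype=float)
    return response_times < threshold

# Hypothetical response times (seconds) for one item across examinees.
times = np.array([2.1, 14.8, 31.0, 4.4, 58.2, 3.0])
flags = flag_rapid_guesses(times)
print(f"{flags.sum()} of {flags.size} responses flagged as possible rapid guesses")
```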

