Showing all 10 results
Peer reviewed
Cappaert, Kevin J.; Wen, Yao; Chang, Yu-Feng – Measurement: Interdisciplinary Research and Perspectives, 2018
Events such as curriculum changes or practice effects can lead to item parameter drift (IPD) in computer adaptive testing (CAT). The current investigation introduced a point- and weight-adjusted D[superscript 2] method for IPD detection for use in a CAT environment when items are suspected of drifting across test administrations. Type I error and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Identification
Fraillon, Julian, Ed.; Ainley, John, Ed.; Schulz, Wolfram, Ed.; Friedman, Tim, Ed.; Duckworth, Daniel, Ed. – International Association for the Evaluation of Educational Achievement, 2020
IEA's International Computer and Information Literacy Study (ICILS) 2018 investigated how well students are prepared for study, work, and life in a digital world. ICILS 2018 measured international differences in students' computer and information literacy (CIL): their ability to use computers to investigate, create, participate, and communicate at…
Descriptors: International Assessment, Computer Literacy, Information Literacy, Computer Assisted Testing
Peer reviewed
Rausch, Andreas; Seifried, Juergen; Koegler, Kristina; Brandt, Steffen; Eigenmann, Rebecca; Siegfried, Christin – AERA Online Paper Repository, 2016
Although non-cognitive facets--such as interest, attitudes, commitment, self-concept, and so on--are prevalent in contemporary theoretical modeling of competence, they are often neglected in measurement approaches or measured only by global self-report questionnaires. Based on the well-established experience sampling method (ESM) and following…
Descriptors: Computer Assisted Testing, Problem Solving, Measurement, Sampling
Martin, Michael O., Ed.; Mullis, Ina V. S., Ed.; Hooper, Martin, Ed. – International Association for the Evaluation of Educational Achievement, 2017
"Methods and Procedures in PIRLS 2016" documents the development of the Progress in International Reading Literacy Study (PIRLS) assessments and questionnaires and describes the methods used in sampling, translation verification, data collection, database construction, and the construction of the achievement and context questionnaire…
Descriptors: Foreign Countries, Achievement Tests, Grade 4, International Assessment
Wagemaker, Hans, Ed. – International Association for the Evaluation of Educational Achievement, 2020
Although international large-scale assessment (ILSA) of education, pioneered by the International Association for the Evaluation of Educational Achievement (IEA), is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to…
Descriptors: International Assessment, Achievement Tests, Educational Assessment, Comparative Analysis
OECD Publishing, 2013
The Programme for the International Assessment of Adult Competencies (PIAAC) has been planned as an ongoing program of assessment. The first cycle of the assessment has involved two "rounds." The first round, which is covered by this report, took place over the period of January 2008-October 2013. The main features of the first cycle of…
Descriptors: International Assessment, Adults, Skills, Test Construction
Peay, Edmund R. – 1982
The method for questionnaire construction described in this paper makes it convenient to generate as many different forms for a questionnaire as there are respondents. The method is based on using the computer to produce the questionnaire forms themselves. In this way the items or subgroups of items of the questionnaire may be randomly ordered or…
Descriptors: Computer Assisted Testing, Computer Software, Questionnaires, Sampling
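Peay's paper predates modern scripting, and the abstract gives no implementation detail, but the core idea it describes--using the computer to generate a distinct, randomly ordered questionnaire form per respondent--can be sketched in a few lines. The function name and item labels below are illustrative assumptions, not from the source:

```python
import random

def make_form(items, respondent_seed):
    """Return one randomized questionnaire form (illustrative sketch).

    Seeding per respondent yields as many distinct item orders as there
    are respondents, while keeping any individual form reproducible.
    """
    rng = random.Random(respondent_seed)
    form = list(items)  # copy so the master item list is untouched
    rng.shuffle(form)
    return form

items = ["Q1", "Q2", "Q3", "Q4", "Q5"]
forms = [make_form(items, respondent_id) for respondent_id in range(3)]
```

The same pattern extends to shuffling subgroups of items rather than single items, which the abstract also mentions.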
Nitko, Anthony J.; Hsu, Tse-chi – 1984
Item analysis procedures appropriate for domain-referenced classroom testing are described. A conceptual framework within which item statistics can be considered and promising statistics in light of this framework are presented. The sampling fluctuations of the more promising item statistics for sample sizes comparable to the typical classroom…
Descriptors: Computer Assisted Testing, Criterion Referenced Tests, Item Analysis, Microcomputers
Ree, Malcolm James; Jensen, Harald E. – 1980
By means of computer simulation of test responses, the reliability of item analysis data and the accuracy of equating were examined for hypothetical samples of 250, 500, 1000, and 2000 subjects for two tests with 20 equating items plus 60 additional items on the same scale. Birnbaum's three-parameter logistic model was used for the simulation. The…
Descriptors: Computer Assisted Testing, Equated Scores, Error of Measurement, Item Analysis
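The simulation approach the abstract describes rests on Birnbaum's three-parameter logistic (3PL) model, under which the probability of a correct response is P(theta) = c + (1 - c) / (1 + exp(-1.7 a (theta - b))), with a = discrimination, b = difficulty, c = guessing, and 1.7 the conventional scaling constant. A minimal sketch of generating 0/1 response data from this model (function name and parameter values are illustrative, not from the source):

```python
import math
import random

def simulate_3pl(thetas, items, seed=0):
    """Simulate 0/1 item responses under the 3PL model (sketch).

    thetas: list of examinee ability values.
    items:  list of (a, b, c) = (discrimination, difficulty, guessing).
    Each response is 1 with probability given by the 3PL curve.
    """
    rng = random.Random(seed)
    data = []
    for theta in thetas:
        row = []
        for a, b, c in items:
            p = c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))
            row.append(1 if rng.random() < p else 0)
        data.append(row)
    return data
```

Repeating such simulations over samples of 250 to 2000 simulated examinees, as in the study, lets one compare the stability of item statistics and equating results across sample sizes.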
Allen, Nancy L.; Donoghue, John R. – 1995
This Monte Carlo study examined the effect of complex sampling of items on the measurement of differential item functioning (DIF) using the Mantel-Haenszel procedure. Data were generated using a three-parameter logistic item response theory model according to the balanced incomplete block (BIB) design used in the National Assessment of Educational…
Descriptors: Computer Assisted Testing, Difficulty Level, Elementary Secondary Education, Identification
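The Mantel-Haenszel procedure named in the abstract compares a reference and a focal group item by item, stratified by total score. Its core statistic is the MH common odds ratio pooled over score-level 2x2 tables; a value near 1.0 suggests no differential item functioning. A minimal sketch (the table layout is an assumption for illustration, and real analyses add continuity corrections and a chi-square test not shown here):

```python
def mantel_haenszel_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio across strata (sketch).

    tables: list of 2x2 counts (a, b, c, d) per total-score stratum:
    a = reference group correct,  b = reference group incorrect,
    c = focal group correct,      d = focal group incorrect.
    """
    num = den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n  # weighted "concordant" cross-products
        den += b * c / n  # weighted "discordant" cross-products
    return num / den
```

The Monte Carlo question the study asks is how complex item sampling, such as the NAEP balanced incomplete block design, distorts this statistic's behavior relative to simple random sampling.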