Showing 361 to 375 of 1,057 results
He, Wei; Diao, Qi; Hauser, Carl – Online Submission, 2013
This study compares four existing procedures for handling item selection in severely constrained computerized adaptive tests (CATs). These procedures include the weighted deviation model (WDM), the weighted penalty model (WPM), the maximum priority index (MPI), and the shadow test approach (STA). Severely constrained CATs refer to those adaptive tests seeking…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
Peer reviewed
Direct link
Huebner, Alan; Li, Zhushan – Applied Psychological Measurement, 2012
Computerized classification tests (CCTs) classify examinees into categories such as pass/fail, master/nonmaster, and so on. This article proposes the use of stochastic methods from sequential analysis to address item overexposure, a practical concern in operational CCTs. Item overexposure is traditionally dealt with in CCTs by the Sympson-Hetter…
Descriptors: Computer Assisted Testing, Classification, Statistical Analysis, Test Items
Peer reviewed
Direct link
Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia – European Educational Research Journal, 2017
Combinations of different item formats are found quite often in large-scale assessments, and dimensionality analyses often indicate that tests are multidimensional with respect to task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and…
Descriptors: Foreign Countries, Computer Literacy, Information Literacy, International Assessment
Martin, Michael O., Ed.; Mullis, Ina V. S., Ed.; Hooper, Martin, Ed. – International Association for the Evaluation of Educational Achievement, 2017
"Methods and Procedures in PIRLS 2016" documents the development of the Progress in International Reading Literacy Study (PIRLS) assessments and questionnaires and describes the methods used in sampling, translation verification, data collection, database construction, and the construction of the achievement and context questionnaire…
Descriptors: Foreign Countries, Achievement Tests, Grade 4, International Assessment
Peer reviewed
Direct link
DeBoer, George E.; Quellmalz, Edys S.; Davenport, Jodi L.; Timms, Michael J.; Herrmann-Abell, Cari F.; Buckley, Barbara C.; Jordan, Kevin A.; Huang, Chun-Wei; Flanagan, Jean C. – Journal of Research in Science Teaching, 2014
Online testing holds much promise for assessing students' complex science knowledge and inquiry skills. In the current study, we examined the comparative effectiveness of assessment tasks and test items presented in online modules that used either a static, active, or interactive modality. A total of 1,836 students from the classrooms of 22 middle…
Descriptors: Computer Assisted Testing, Test Items, Interaction, Middle School Students
Peer reviewed
Direct link
Wang, Chun – Journal of Educational and Behavioral Statistics, 2014
Many latent traits in social sciences display a hierarchical structure, such as intelligence, cognitive ability, or personality. Usually a second-order factor is linearly related to a group of first-order factors (also called domain abilities in cognitive ability measures), and the first-order factors directly govern the actual item responses.…
Descriptors: Measurement, Accuracy, Item Response Theory, Adaptive Testing
Peer reviewed
PDF on ERIC Download full text
Han, Kyung T.; Guo, Fanmin – Practical Assessment, Research & Evaluation, 2014
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Descriptors: Maximum Likelihood Statistics, Structural Equation Models, Data, Computer Assisted Testing
Peer reviewed
Direct link
Wei, Hua; Lin, Jie – International Journal of Testing, 2015
Out-of-level testing refers to the practice of assessing a student with a test that is intended for students at a higher or lower grade level. Although the appropriateness of out-of-level testing for accountability purposes has been questioned by educators and policymakers, incorporating out-of-level items in formative assessments for accurate…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Instructional Program Divisions
Wagemaker, Hans, Ed. – International Association for the Evaluation of Educational Achievement, 2020
Although international large-scale assessment (ILSA) of education, pioneered by the International Association for the Evaluation of Educational Achievement, is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to…
Descriptors: International Assessment, Achievement Tests, Educational Assessment, Comparative Analysis
Peer reviewed
Direct link
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G. – Educational and Psychological Measurement, 2013
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Statistical Analysis
Zhou, Xuechun – ProQuest LLC, 2012
Current CAT applications consist of predominantly dichotomous items, and CATs with polytomously scored items are limited. To ascertain the best approach to polytomous CAT, a significant amount of research has been conducted on item selection, ability estimation, and impact of termination rules based on polytomous IRT models. Few studies…
Descriptors: Item Banks, Computer Assisted Testing, Adaptive Testing, Test Items
Peer reviewed
PDF on ERIC Download full text
Huebner, Alan – Practical Assessment, Research & Evaluation, 2012
Computerized classification tests (CCTs) often use sequential item selection, which administers items by maximizing psychometric information at a cut point demarcating passing and failing scores. This paper illustrates why this method of item selection leads to the overexposure of a significant number of items, and the performances of…
Descriptors: Computer Assisted Testing, Classification, Test Items, Sequential Approach
Peer reviewed
Direct link
Kaspar, Roman; Döring, Ottmar; Wittmann, Eveline; Hartig, Johannes; Weyland, Ulrike; Nauerth, Annette; Möllers, Michaela; Rechenbach, Simone; Simon, Julia; Worofka, Iberé – Vocations and Learning, 2016
Valid and reliable standardized assessment of nursing competencies is needed to monitor the quality of vocational education and training (VET) in nursing and evaluate learning outcomes for care work trainees with increasingly heterogeneous learning backgrounds. To date, however, the modeling of professional competencies has not yet evolved into…
Descriptors: Nursing Education, Geriatrics, Video Technology, Computer Assisted Testing
Peer reviewed
PDF on ERIC Download full text
Golovachyova, Viktoriya N.; Menlibekova, Gulbakhyt Zh.; Abayeva, Nella F.; Ten, Tatyana L.; Kogaya, Galina D. – International Journal of Environmental and Science Education, 2016
Using computer-based monitoring systems that rely on tests could be the most effective way to evaluate knowledge. The problem of objective knowledge assessment by means of testing takes on a new dimension in the context of new paradigms in education. Our analysis of the existing test methods enabled us to conclude that tests with selected…
Descriptors: Expertise, Computer Assisted Testing, Student Evaluation, Knowledge Level
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that more accurately, compared to previous assessments, measure student progress toward college and career readiness. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias