Showing 181 to 195 of 1,049 results
Peer reviewed
Direct link
Moyer, Eric L.; Galindo, Jennifer L.; Dodd, Barbara G. – Educational and Psychological Measurement, 2012
Managing test specifications--both multiple nonstatistical constraints and flexibly defined constraints--has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Item Analysis
Peer reviewed
PDF on ERIC (full text available)
Khunkrai, Naruemon; Sawangboon, Tatsirin; Ketchatturat, Jatuphum – Educational Research and Reviews, 2015
The aim of this research is to compare the test information and evaluation results produced by a multidimensional computerized adaptive scholastic aptitude test program administered to grade 9 students under different test-review conditions. Grade 9 students of the Secondary Educational Service Area Office in the North-east of…
Descriptors: Foreign Countries, Secondary School Students, Grade 9, Computer Assisted Testing
Lyons, Douglas; Niblock, Andrew W. – Independent School, 2014
Independent schools are, for the most part, exempt from mandatory participation in standardized tests designed for state and federal comparisons, and they are not required to take part in comparative international assessments. The anxiety in the broader culture, however, is driving a growing interest among independent school parents (and prospective…
Descriptors: Global Approach, Comparative Analysis, Comparative Education, Educational Practices
Zheng, Yi; Nozawa, Yuki; Gao, Xiaohong; Chang, Hua-Hua – ACT, Inc., 2012
Multistage adaptive tests (MSTs) have gained increasing popularity in recent years. MST is a balanced compromise between linear test forms (i.e., paper-and-pencil testing and computer-based testing) and traditional item-level computer-adaptive testing (CAT). It combines the advantages of both. On one hand, MST is adaptive (and therefore more…
Descriptors: Adaptive Testing, Heuristics, Accuracy, Item Banks
Peer reviewed
Direct link
Han, Kyung T. – Journal of Educational Measurement, 2012
Successful administration of computerized adaptive testing (CAT) programs in educational settings requires that test security and item exposure control issues be taken seriously. Developing an item selection algorithm that strikes the right balance between test precision and level of item pool utilization is the key to successful implementation…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
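Several entries above, including this one, concern the tension between measurement precision and item pool utilization in CAT item selection. As an illustration of the kind of algorithm involved (not the article's own method), here is a minimal Python sketch of "randomesque" selection under an assumed two-parameter logistic (2PL) model: rather than always administering the single most informative item, it draws at random from the k most informative unused items, trading a little precision for lower item exposure. All names and parameter values are illustrative.

```python
import math
import random

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta, pool, administered, k=5, rng=random):
    """Randomesque selection: pick at random among the k most
    informative items not yet administered. pool is a list of
    (a, b) parameter pairs; administered is a set of indices."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    candidates.sort(key=lambda i: item_information(theta, *pool[i]),
                    reverse=True)
    return rng.choice(candidates[:k])
```

With k=1 this reduces to classic maximum-information selection; larger k spreads exposure across more of the pool.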
Peer reviewed
Direct link
Choi, Seung W.; Grady, Matthew W.; Dodd, Barbara G. – Educational and Psychological Measurement, 2011
The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
Peer reviewed
Direct link
Farrow, Robert; Pitt, Rebecca; de los Arcos, Beatriz; Perryman, Leigh-Anne; Weller, Martin; McAndrew, Patrick – British Journal of Educational Technology, 2015
The true power of comparative research around the impact and use of open educational resources is only just being realised, largely through the work done by the Hewlett-funded OER Research Hub, based at The Open University (UK). Since late 2012, the project has used a combination of surveys, interviews and focus groups to gather data about the use…
Descriptors: Educational Resources, Open Source Technology, Surveys, Interviews
Peer reviewed
Direct link
Chen, Xinnian; Graesser, Donnasue; Sah, Megha – Advances in Physiology Education, 2015
Laboratory courses serve as important gateways to science, technology, engineering, and mathematics education. One of the challenges in assessing laboratory learning is to conduct meaningful and standardized practical exams, especially for large multisection laboratory courses. Laboratory practical exams in life sciences courses are frequently…
Descriptors: Laboratory Experiments, Standardized Tests, Testing Programs, Testing Problems
Peer reviewed
Direct link
Crotts, Katrina; Sireci, Stephen G.; Zenisky, April – Journal of Applied Testing Technology, 2012
Validity evidence based on test content is important for educational tests to demonstrate the degree to which they fulfill their purposes. Most content validity studies involve subject matter experts (SMEs) who rate items that comprise a test form. In computerized-adaptive testing, examinees take different sets of items and test "forms"…
Descriptors: Computer Assisted Testing, Adaptive Testing, Content Validity, Test Content
Peer reviewed
Direct link
McAllister, Daniel; Guidice, Rebecca M. – Teaching in Higher Education, 2012
The primary goal of teaching is to successfully facilitate learning. Testing can help accomplish this goal in two ways. First, testing can provide a powerful motivation for students to prepare when they perceive that the effort involved leads to valued outcomes. Second, testing can provide instructors with valuable feedback on whether their…
Descriptors: Testing, Role, Student Motivation, Feedback (Response)
Peer reviewed
Direct link
van der Linden, Wim J.; Diao, Qi – Journal of Educational Measurement, 2011
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
Descriptors: Test Items, Test Format, Test Construction, Item Banks
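To make the automated test assembly idea concrete: ATA formulates item selection as a mixed-integer program over binary decision variables (include item i or not), maximizing an objective such as test information subject to content constraints. The toy brute-force sketch below solves the same selection problem by exhaustive search, which is feasible only for tiny pools; a real MIP solver is what scales. The pool layout, content labels, and constraint are assumptions for illustration, not the article's formulation.

```python
from itertools import combinations

def assemble(pool, n, min_algebra):
    """Pick n items maximizing summed information at the target
    ability, subject to a minimum count of 'algebra' items.
    pool is a list of (info_at_target_theta, content_area)."""
    best, best_info = None, -1.0
    for combo in combinations(range(len(pool)), n):
        # Content constraint: at least min_algebra algebra items.
        if sum(1 for i in combo if pool[i][1] == "algebra") < min_algebra:
            continue
        info = sum(pool[i][0] for i in combo)
        if info > best_info:
            best, best_info = combo, info
    return best, best_info
```

In MIP form this is: maximize sum(info_i * x_i) subject to sum(x_i) = n, sum(x_i for algebra items) >= min_algebra, x_i in {0, 1}.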
Peer reviewed
Direct link
Sadler, Philip M.; Coyle, Harold; Cook Smith, Nancy; Miller, Jaimie; Mintzes, Joel; Tanner, Kimberly; Murray, John – CBE - Life Sciences Education, 2013
We report on the development of an item test bank and associated instruments based on the National Research Council (NRC) K-8 life sciences content standards. Utilizing hundreds of studies in the science education research literature on student misconceptions, we constructed 476 unique multiple-choice items that measure the degree to which test…
Descriptors: National Standards, Knowledge Level, Biological Sciences, Item Banks
Peer reviewed
PDF on ERIC (full text available)
Michael, Timothy B.; Williams, Melissa A. – Administrative Issues Journal: Education, Practice, and Research, 2013
As online programs at conventional universities continue to expand, administrators and faculty face new challenges. Academic dishonesty is nothing new, but an online testing environment requires different strategies and tactics from what we have had to consider in the past. Our university has recently adapted successful face-to-face programs in…
Descriptors: Cheating, Online Courses, Ethics, Computer Assisted Testing
Peer reviewed
Direct link
Yonker, Julie E. – Assessment & Evaluation in Higher Education, 2011
With the advent of online test banks and large introductory classes, instructors have often turned to textbook publisher-generated multiple-choice question (MCQ) exams in their courses. Multiple-choice questions are often divided into categories of factual or applied, thereby implicating levels of cognitive processing. This investigation examined…
Descriptors: Multiple Choice Tests, Item Banks, Introductory Courses, Cognitive Processes
Peer reviewed
Direct link
Songmuang, Pokpong; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2011
The purpose of this research is to automatically construct multiple equivalent test forms that have equivalent qualities indicated by test information functions based on item response theory. There has been a trade-off in previous studies between the computational costs and the equivalent qualities of test forms. To alleviate this problem, we…
Descriptors: Item Response Theory, Mathematics, Test Construction, Test Format