Showing 1 to 15 of 18 results
Peer reviewed
Friyatmi; Mardapi, Djemari; Haryanto; Rahmi, Elvi – European Journal of Educational Research, 2020
The advancement of information and technology resulted in the change in conventional test methods. The weaknesses of the paper-based test can be minimized using the computer-based test (CBT). The development of a CBT desperately needs a computerized item bank. This study aimed to develop a computerized item bank for classroom and school-based…
Descriptors: Computer Assisted Testing, Item Banks, High School Students, Foreign Countries
Peer reviewed
Kosan, Aysen Melek Aytug; Koç, Nizamettin; Elhan, Atilla Halil; Öztuna, Derya – International Journal of Assessment Tools in Education, 2019
Progress Test (PT) is a form of assessment that simultaneously measures ability levels of all students in a certain educational program and their progress over time by providing them with same questions and repeating the process at regular intervals with parallel tests. Our objective was to generate an item bank for the PT and to examine the…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Medical Education
Peer reviewed
Sun, Bo; Zhu, Yunzong; Xiao, Yongkang; Xiao, Rong; Wei, Yungang – IEEE Transactions on Learning Technologies, 2019
In recent years, computerized adaptive testing (CAT) has gained popularity as an important means to evaluate students' ability. Assigning tags to test questions is crucial in CAT. Manual tagging is widely used for constructing question banks; however, this approach is time-consuming and might lead to consistency issues. Automatic question tagging,…
Descriptors: Computer Assisted Testing, Student Evaluation, Test Items, Multiple Choice Tests
Peer reviewed
Conejo, Ricardo; Guzmán, Eduardo; Trella, Monica – International Journal of Artificial Intelligence in Education, 2016
This article describes the evolution and current state of the domain-independent Siette assessment environment. Siette supports different assessment methods--including classical test theory, item response theory, and computer adaptive testing--and integrates them with multidimensional student models used by intelligent educational systems.…
Descriptors: Automation, Student Evaluation, Intelligent Tutoring Systems, Item Banks
He, Wei – ProQuest LLC, 2010
Item pool quality has been regarded as one important factor to help realize enhanced measurement quality for the computerized adaptive test (CAT) (e.g., Flaugher, 2000; Jensema, 1977; McBride & Wise, 1976; Reckase, 1976; 2003; van der Linden, Ariel, & Veldkamp, 2006; Veldkamp & van der Linden, 2000; Xing & Hambleton, 2004). However, studies are…
Descriptors: Test Items, Computer Assisted Testing, Item Analysis, Test Construction
Peer reviewed
Chatzopoulou, D. I.; Economides, A. A. – Journal of Computer Assisted Learning, 2010
This paper presents Programming Adaptive Testing (PAT), a Web-based adaptive testing system for assessing students' programming knowledge. PAT was used in two high school programming classes by 73 students. The question bank of PAT is composed of 443 questions. A question is classified in one out of three difficulty levels. In PAT, the levels of…
Descriptors: Student Evaluation, Prior Learning, Programming, High School Students
Peer reviewed
Dermo, John – British Journal of Educational Technology, 2009
This paper describes a piece of research carried out at the University of Bradford into student perceptions of e-assessment. An online questionnaire was delivered to 130 undergraduates who had taken part in online assessment (either formative or summative) during the academic year 2007-2008. The survey looked at six main dimensions: (1) affective…
Descriptors: Student Attitudes, Student Surveys, Undergraduate Students, Foreign Countries
Peer reviewed
Glas, Cees A. W.; Geerlings, Hanneke – Studies in Educational Evaluation, 2009
Pupil monitoring systems support the teacher in tailoring teaching to the individual level of a student and in comparing the progress and results of teaching with national standards. The systems are based on the availability of an item bank calibrated using item response theory. The assessment of the students' progress and results can be further…
Descriptors: Item Banks, Adaptive Testing, National Standards, Psychometrics
Peer reviewed
Sapriati, Amalia; Zuhairi, Aminudin – Turkish Online Journal of Distance Education, 2010
This paper addresses the use of computer-based testing in distance education, based on the experience of Universitas Terbuka (UT), Indonesia. Computer-based testing has been developed at UT for reasons of meeting the specific needs of distance students as the following: (1) students' inability to sit for the scheduled test; (2) conflicting test…
Descriptors: Alternative Assessment, Distance Education, Computer Assisted Testing, Computer System Design
Peer reviewed
Koong, Chorng-Shiuh; Wu, Chi-Ying – Computers & Education, 2010
Multiple intelligences, with its hypothesis and implementation, have ascended to a prominent status among the many instructional methodologies. Meanwhile, pedagogical theories and concepts are in need of more alternative and interactive assessments to prove their prevalence (Kinugasa, Yamashita, Hayashi, Tominaga, & Yamasaki, 2005). In general,…
Descriptors: Multiple Intelligences, Test Items, Grading, Programming
Peer reviewed
Yin, Peng-Yeng; Chang, Kuang-Cheng; Hwang, Gwo-Jen; Hwang, Gwo-Haur; Chan, Ying – Educational Technology & Society, 2006
To accurately analyze the problems of students in learning, the composed test sheets must meet multiple assessment criteria, such as the ratio of relevant concepts to be evaluated, the average discrimination degree, difficulty degree and estimated testing time. Furthermore, to precisely evaluate the improvement of student's learning performance…
Descriptors: Student Evaluation, Performance Based Assessment, Test Construction, Computer Assisted Testing
Peer reviewed
Blunt, Adrian; Dent, Beverley – Canadian Journal for the Study of Adult Education, 1999
Item banks and computer-generated tests in a Canadian community college's computerized testing system were evaluated and student and faculty attitudes surveyed. Item banks were found inadequate, tests addressed low-level learning outcomes, and the system's location in centralized administration gave instructional staff little ownership or…
Descriptors: Community Colleges, Computer Assisted Testing, Foreign Countries, Item Banks
Peer reviewed
Li, Yuan H.; Schafer, William D. – Applied Psychological Measurement, 2005
Under a multidimensional item response theory (MIRT) computerized adaptive testing (CAT) testing scenario, a trait estimate (theta) in one dimension will provide clues for subsequently seeking a solution in other dimensions. This feature may enhance the efficiency of MIRT CAT's item selection and its scoring algorithms compared with its…
Descriptors: Adaptive Testing, Item Banks, Computation, Psychological Studies
Mitchell, Alison C. – Programmed Learning and Educational Technology, 1982
Describes a Scottish project--"School-based assessment using item banking"--investigating the feasibility of producing computer-based test marking and reporting facilities. Teachers would construct their own tests from item banks and carry out diagnostic assessment and criterion-referenced measurement to evaluate students with SCRIBE, a…
Descriptors: Academic Achievement, Computer Assisted Testing, Computer Programs, Criterion Referenced Tests
Fuhs, F. Paul – 1980
The function and structure of a data base system called RIBYT (Review It Before You Test) is described. RIBYT simultaneously controls and associates questions in question pools for many courses of instruction. The data base stores questions created by both faculty and students and is used for formal testing and student self-assessment. The…
Descriptors: Computer Assisted Testing, Computer Programs, Data Collection, Databases