Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 10
Author
Al-A'ali, Mansoor: 1
Banerjee, Manju: 1
Behrens, John T.: 1
Chu, Hui-Chun: 1
Chun, Euljung: 1
DiCerbo, Kristen E.: 1
Dikli, Semire: 1
Dolan, Robert P.: 1
Dwyer, Francis: 1
Gifford, Bernard: 1
Hall, Tracey E.: 1
Publication Type
Journal Articles: 8
Reports - Descriptive: 5
Reports - Research: 4
Reports - Evaluative: 2
Dissertations/Theses -…: 1
Speeches/Meeting Papers: 1
Education Level
Elementary Secondary Education: 12
Higher Education: 4
Postsecondary Education: 4
Elementary Education: 2
Secondary Education: 2
Grade 3: 1
Grade 4: 1
High Schools: 1
Location
China: 1
Taiwan: 1
United Kingdom: 1
Laws, Policies, & Programs
Individuals with Disabilities…: 1
Koon, Sharon – ProQuest LLC, 2010
This study examined the effectiveness of the odds-ratio method (Penfield, 2008) and the multinomial logistic regression method (Kato, Moen, & Thurlow, 2009) for measuring differential distractor functioning (DDF) effects in comparison to the standardized distractor analysis approach (Schmitt & Bleistein, 1987). Students classified as participating…
Descriptors: Test Bias, Test Items, Reference Groups, Lunch Programs
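The snippet names the odds-ratio DDF measure without reproducing it. As a hedged illustration only (assuming Penfield's index takes the familiar Mantel-Haenszel form, which the snippet does not confirm), a common odds ratio pooled over score strata can be written as
\[
\hat{\alpha}_{\mathrm{MH}} = \frac{\sum_{s} A_s D_s / N_s}{\sum_{s} B_s C_s / N_s},
\qquad
\hat{\lambda} = \ln \hat{\alpha}_{\mathrm{MH}},
\]
where, within score stratum s and among examinees who answered the item incorrectly, A_s and B_s count reference-group examinees who did and did not choose the studied distractor, C_s and D_s count the corresponding focal-group examinees, and N_s is the stratum total; values of \hat{\lambda} far from zero suggest the distractor functions differently across groups.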
Wang, Shudong; Jiao, Hong; Jin, Ying; Thum, Yeow Meng – Online Submission, 2010
The vertical scales of large-scale achievement tests created using item response theory (IRT) models are mostly based on clustered (or correlated) educational data in which students are typically nested within groups or settings (classrooms or schools). While such an application directly violates the assumption of independent samples of persons in…
Descriptors: Scaling, Achievement Tests, Data Analysis, Item Response Theory
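To illustrate the independence issue the abstract raises (a general multilevel-IRT sketch, not the authors' specific model), a Rasch model with a random school effect can be written as
\[
\operatorname{logit} P(y_{ij} = 1) = \theta_j - b_i,
\qquad
\theta_j = \mu_{k(j)} + e_j,
\quad
\mu_k \sim N(0, \tau^2),\; e_j \sim N(0, \sigma^2),
\]
so that abilities of students j within the same school k share the common component \mu_k; fitting a conventional IRT model that ignores \tau^2 treats these correlated responses as if they were independent.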
Behrens, John T.; Mislevy, Robert J.; DiCerbo, Kristen E.; Levy, Roy – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2010
The world in which learning and assessment must take place is rapidly changing. The digital revolution has created a vast space of interconnected information, communication, and interaction. Functioning effectively in this environment requires so-called 21st century skills such as technological fluency, complex problem solving, and the ability to…
Descriptors: Evidence, Student Evaluation, Educational Assessment, Influence of Technology
Chu, Hui-Chun; Hwang, Gwo-Jen; Huang, Yueh-Min – Innovations in Education and Teaching International, 2010
Conventional testing systems usually give students a score as their test result, but do not show them how to improve their learning performance. Researchers have indicated that students would benefit more if individual learning guidance could be provided. However, most of the existing learning diagnosis models ignore the fact that one concept…
Descriptors: Test Results, Teaching Methods, Elementary School Students, Elementary School Teachers
Johnson, Martin; Nadas, Rita – Learning, Media and Technology, 2009
Within large scale educational assessment agencies in the UK, there has been a shift towards assessors marking digitally scanned copies rather than the original paper scripts that were traditionally used. This project uses extended essay examination scripts to consider whether the mode in which an essay is read potentially influences the…
Descriptors: Reading Comprehension, Educational Assessment, Internet, Essay Tests
Al-A'ali, Mansoor – Educational Technology & Society, 2007
Computer adaptive testing is the study of scoring tests and questions based on assumptions concerning the mathematical relationship between examinees' ability and the examinees' responses. Adaptive student tests, which are based on item response theory (IRT), have many advantages over conventional tests. We use the least square method, a…
Descriptors: Educational Testing, Higher Education, Elementary Secondary Education, Student Evaluation
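For context on the IRT machinery the abstract invokes (standard IRT background, not a detail drawn from this article), the two-parameter logistic model gives the probability that an examinee of ability \theta answers item i correctly as
\[
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}},
\]
where a_i is the item discrimination and b_i its difficulty; adaptive tests commonly administer next the item that maximizes the information I_i(\theta) = a_i^2 P_i(\theta)[1 - P_i(\theta)] at the current ability estimate.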
Scalise, Kathleen; Gifford, Bernard – Journal of Technology, Learning, and Assessment, 2006
Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting and real-time feedback mechanisms. One potential limitation for realizing the benefits of computer-based assessment in both instructional assessment and large scale testing comes in…
Descriptors: Electronic Learning, Educational Assessment, Information Technology, Classification
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Lin, Hong; Dwyer, Francis – TechTrends: Linking Research and Practice to Improve Learning, 2006
According to the Committee on the Foundations of Educational Assessment, traditional educational assessment does a reasonable job of measuring knowledge of basic facts, procedures and proficiency of an area of the curriculum. However, the traditional approach fails to capture the breadth and richness of knowledge and cognition. Such a concern…
Descriptors: Computer Assisted Testing, Educational Assessment, Computers, Thinking Skills
Educational Testing Service, 2006
Innovations, ETS's corporate magazine, provides information on educational assessment for educators, school leaders, researchers and policymakers around the world. Each issue of Innovations focuses on a particular theme in assessment. This issue reports on how new technologies in classrooms around the world are enhancing teaching, learning and…
Descriptors: Foreign Countries, Educational Assessment, Writing Evaluation, Periodicals
Ketterlin-Geller, Leanne R. – Journal of Technology, Learning, and Assessment, 2005
Universal design for assessment (UDA) is intended to increase participation of students with disabilities and English-language learners in general education assessments by addressing student needs through customized testing platforms. Computer-based testing provides an optimal format for creating individually-tailored tests. However, although a…
Descriptors: Student Needs, Disabilities, Grade 3, Second Language Learning
Dolan, Robert P.; Hall, Tracey E.; Banerjee, Manju; Chun, Euljung; Strangman, Nicole – Journal of Technology, Learning, and Assessment, 2005
Standards-based reform efforts are highly dependent on accurate assessment of all students, including those with disabilities. The accuracy of current large-scale assessments is undermined by construct-irrelevant factors including access barriers, a particular problem for students with disabilities. Testing accommodations such as the read-aloud…
Descriptors: United States History, Testing Accommodations, Test Content, Learning Disabilities