Showing 1 to 15 of 23 results
Peer reviewed
Padayachee, Kershree; Matimolane, M. – Teaching in Higher Education, 2025
In the shift to Emergency Remote Teaching and Learning (ERT&L) during the COVID-19 pandemic, remote assessment and feedback became a major source of discontent and challenge for students and staff. This paper is a reflection and analysis of assessment practices during ERT&L, and our theorisation of the possibilities for shifts towards…
Descriptors: Educational Quality, Social Justice, Distance Education, Feedback (Response)
Peer reviewed
Allehaiby, Wid Hasen; Al-Bahlani, Sara – Arab World English Journal, 2021
One of the main challenges higher education institutions encounter amid the recent COVID-19 crisis is transferring assessment approaches from the traditional face-to-face form to the online Emergency Remote Teaching approach. A set of language assessment principles (practicality, reliability, validity, authenticity, and washback), which can be…
Descriptors: Barriers, Distance Education, Evaluation Methods, Teaching Methods
Peer reviewed
Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández – Journal of Science Education and Technology, 2013
Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, "Eyegrade," a system for automatic grading of multiple…
Descriptors: Multiple Choice Tests, Grading, Computer Assisted Testing, Man Machine Systems
Peer reviewed
Massey, Chris L.; Gambrell, Linda B. – Literacy Research and Instruction, 2014
Literacy educators and researchers have long recognized the importance of increasing students' writing proficiency across age and grade levels. With the release of the Common Core State Standards (CCSS), a new and greater emphasis is being placed on writing in the K-12 curriculum. Educators, as well as the authors of the CCSS, agree that…
Descriptors: Writing Evaluation, State Standards, Instructional Effectiveness, Writing Ability
Peer reviewed
Irwin, Brian; Hepplestone, Stuart – Assessment & Evaluation in Higher Education, 2012
There have been calls in the literature for changes to assessment practices in higher education, to increase flexibility and give learners more control over the assessment process. This article explores the possibilities of allowing student choice in the format used to present their work, as a starting point for changing assessment, based on…
Descriptors: Student Evaluation, College Students, Selection, Computer Assisted Testing
Peer reviewed
Brown, Gavin T. L. – Higher Education Quarterly, 2010
The use of timed essay examinations is a well-established means of evaluating student learning in higher education. The reliability of essay scoring is highly problematic, and essay examination grades appear to depend heavily on the language and organisational components of writing. Computer-assisted scoring of essays makes use of language…
Descriptors: Higher Education, Essay Tests, Validity, Scoring
Peer reviewed
Thomas, Michael L. – Assessment, 2011
Item response theory (IRT) and related latent variable models represent modern psychometric theory, the successor to classical test theory in psychological assessment. Although IRT has become prevalent in the measurement of ability and achievement, its contributions to clinical domains have been less extensive. Applications of IRT to clinical…
Descriptors: Item Response Theory, Psychological Evaluation, Reliability, Error of Measurement
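For readers unfamiliar with the model family named in this abstract, the following sketch (not taken from Thomas, 2011) computes correct-response probabilities under a two-parameter logistic (2PL) IRT model; the item parameters and ability values are invented for illustration.

```python
import numpy as np

def p_correct_2pl(theta, a, b):
    """2PL IRT item response function: probability of a correct response
    for ability theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item: discrimination a = 1.2, difficulty b = 0.0.
for theta in (-2.0, 0.0, 2.0):
    print(f"theta = {theta:+.1f}  P(correct) = {p_correct_2pl(theta, 1.2, 0.0):.3f}")
```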
Peer reviewed
Steinmetz, Jean-Paul; Brunner, Martin; Loarer, Even; Houssemand, Claude – Psychological Assessment, 2010
The Wisconsin Card Sorting Test (WCST) assesses executive and frontal lobe function and can be administered manually or by computer. Despite the widespread application of the two versions, the psychometric equivalence of their scores has rarely been evaluated, and only a limited set of criteria has been considered. The present experimental study (N =…
Descriptors: Computer Assisted Testing, Psychometrics, Test Theory, Scores
Peer reviewed
Kaya, Fatih; Delen, Erhan; Ritter, Nicola L. – Journal of Psychoeducational Assessment, 2012
This article presents a review of the Children's Organizational Skills Scales (COSS) which were designed to assess how children organize their time, materials, and actions to accomplish important tasks at home and school. The scale quantifies children's skills in organization, time management, and planning (OTMP). The COSS is a multi-informant…
Descriptors: Measures (Individuals), Children, Organization, Task Analysis
Peer reviewed
Chang, Shu-Ren; Plake, Barbara S.; Kramer, Gene A.; Lien, Shu-Mei – Educational and Psychological Measurement, 2011
This study examined the amount of time that different ability-level examinees spend on questions they answer correctly or incorrectly across different pretest item blocks presented on a fixed-length, time-restricted computerized adaptive test (CAT). Results indicate that different ability-level examinees require different amounts of time to…
Descriptors: Evidence, Test Items, Reaction Time, Adaptive Testing
Peer reviewed
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
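To make the simulation idea concrete, here is a minimal Monte Carlo sketch (not the authors' code) of how speeded responding at the end of a test can distort ability estimates: one simulee answers every item according to a Rasch model, a second guesses at random on the final block, and both are scored with a grid-search maximum-likelihood estimator. Item difficulties, test length, and the guessing rate are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, b):
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def mle_theta(responses, b, grid=np.linspace(-4, 4, 801)):
    """Grid-search maximum-likelihood ability estimate."""
    p = p_correct(grid[:, None], b[None, :])                  # (grid points, items)
    ll = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(ll)]

n_items, n_speeded, true_theta = 40, 10, 0.5
b = rng.normal(0.0, 1.0, n_items)                             # item difficulties

# Unspeeded simulee: all responses follow the model.
normal = (rng.random(n_items) < p_correct(true_theta, b)).astype(int)

# Speeded simulee: random guesses (p = .25) on the last 10 items.
speeded = normal.copy()
speeded[-n_speeded:] = (rng.random(n_speeded) < 0.25).astype(int)

print("theta-hat without speededness:", mle_theta(normal, b))
print("theta-hat with speeded ending:", mle_theta(speeded, b))
```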
Peer reviewed
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and "e-rater"® essay feature variables in the context of the TOEFL® computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Peer reviewed
Hohlfeld, Tina N.; Ritzhaupt, Albert D.; Barron, Ann E. – Journal of Research on Technology in Education, 2010
This article provides an overview of the development and validation of the Student Tool for Technology Literacy (ST²L). Developing valid and reliable objective performance measures for monitoring technology literacy is important to all organizations charged with equipping students with the technology skills needed to successfully…
Descriptors: Test Validity, Ability Grouping, Grade 8, Test Construction
Peer reviewed
Ullstadius, Eva; Carlstedt, Berit; Gustafsson, Jan-Eric – International Journal of Testing, 2008
The influence of general and verbal ability on each of 72 verbal analogy test items was investigated with new factor analytical techniques. The analogy items together with the Computerized Swedish Enlistment Battery (CAT-SEB) were given randomly to two samples of 18-year-old male conscripts (n = 8566 and n = 5289). Thirty-two of the 72 items had…
Descriptors: Test Items, Verbal Ability, Factor Analysis, Swedish
Lai, Cheng-Fei; Nese, Joseph F. T.; Jamgochian, Elisa M.; Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2010
In this technical report, we provide the results of a series of studies on the technical adequacy of the early reading measures available on the easyCBM® assessment system. The results from the two-level hierarchical linear growth model analyses suggest that the reliability of the slope estimates for the easyCBM® reading measures is strong,…
Descriptors: Kindergarten, Grade 1, Early Reading, Reading Tests
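For context on the two-level hierarchical linear growth models mentioned in this abstract, the sketch below fits a random-intercept, random-slope growth model to simulated benchmark scores with statsmodels; the variable names, sample sizes, and effect sizes are fabricated for illustration and are unrelated to the easyCBM data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulate 200 students measured at three benchmark occasions (fall, winter, spring).
n_students, occasions = 200, np.array([0.0, 1.0, 2.0])
student = np.repeat(np.arange(n_students), len(occasions))
time = np.tile(occasions, n_students)

# Level 2: each student gets a random intercept and a random growth slope.
u0 = rng.normal(0.0, 5.0, n_students)   # intercept variability across students
u1 = rng.normal(0.0, 2.0, n_students)   # growth-rate variability across students
score = 40 + u0[student] + (10 + u1[student]) * time + rng.normal(0.0, 4.0, student.size)

df = pd.DataFrame({"student": student, "time": time, "score": score})

# Two-level linear growth model: fixed intercept and slope for time,
# with a random intercept and random slope varying by student.
model = smf.mixedlm("score ~ time", df, groups=df["student"], re_formula="~time")
result = model.fit()
print(result.summary())
```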