Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 17
Since 2006 (last 20 years): 27
Author
Ackerman, Debra J.: 2
Cory, Charles H.: 2
Adair-Hauck, Bonnie: 1
Alderson, J. Charles: 1
Amit Sevak: 1
Amy K. Clark: 1
Anderson, Mark H.: 1
Anderson, Paul S.: 1
Anivan, Sarinee, Ed.: 1
Ardoin, Scott P.: 1
Becker, Kirk A.: 1
Audience
Administrators: 1
Teachers: 1
Location
North Carolina: 2
Pennsylvania: 2
Australia: 1
Canada: 1
China: 1
Delaware: 1
Ghana: 1
Hungary: 1
Illinois: 1
Israel (Tel Aviv): 1
Japan: 1
Assessments and Surveys
Test of English as a Foreign…: 2
Armed Services Vocational…: 1
International English…: 1
Iowa Tests of Basic Skills: 1
Minnesota Multiphasic…: 1
State Trait Anxiety Inventory: 1
Yan Jin; Jason Fan – Language Assessment Quarterly, 2023
In language assessment, AI technology has been incorporated in task design, assessment delivery, automated scoring of performance-based tasks, score reporting, and provision of feedback. AI technology is also used for collecting and analyzing performance data in language assessment validation. Research has been conducted to investigate the…
Descriptors: Language Tests, Artificial Intelligence, Computer Assisted Testing, Test Format
Meagan Karvonen; Russell Swinburne Romine; Amy K. Clark – Practical Assessment, Research & Evaluation, 2024
This paper describes methods and findings from student cognitive labs, teacher cognitive labs, and test administration observations as evidence evaluated in a validity argument for a computer-based alternate assessment for students with significant cognitive disabilities. Validity of score interpretations and uses for alternate assessments based…
Descriptors: Students with Disabilities, Intellectual Disability, Severe Disabilities, Student Evaluation
Patael, Smadar; Shamir, Julia; Soffer, Tal; Livne, Eynat; Fogel-Grinvald, Haya; Kishon-Rabin, Liat – Journal of Computer Assisted Learning, 2022
Background: The global COVID-19 pandemic turned the adoption of online assessment in higher education institutions from a possibility into a necessity. Thus, at the end of the Fall 20/21 semester, Tel Aviv University (TAU)--the largest university in Israel--designed and implemented a scalable procedure for administering proctored remote…
Descriptors: COVID-19, Pandemics, Computer Assisted Testing, Foreign Countries
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Isbell, Daniel R.; Kremmel, Benjamin – Language Testing, 2020
Administration of high-stakes language proficiency tests has been disrupted in many parts of the world as a result of the 2019 novel coronavirus pandemic. Institutions that rely on test scores have been forced to adapt, and in many cases this means using scores from a different test, or a new online version of an existing test, that can be taken…
Descriptors: Language Tests, High Stakes Tests, Language Proficiency, Second Language Learning
Patrick Kyllonen; Amit Sevak; Teresa Ober; Ikkyu Choi; Jesse Sparks; Daniel Fishtein – ETS Research Report Series, 2024
Assessment refers to a broad array of approaches for measuring or evaluating a person's (or group of persons') skills, behaviors, dispositions, or other attributes. Assessments range from standardized tests used in admissions, employee selection, licensure examinations, and domestic and international large-scale assessments of cognitive and…
Descriptors: Assessment Literacy, Testing, Test Bias, Test Construction
Raley, Sheida K.; Shogren, Karrie A.; Rifenbark, Graham G.; Anderson, Mark H.; Shaw, Leslie A. – Journal of Special Education Technology, 2020
The Self-Determination Inventory: Student Report (SDI: SR) was developed to measure the self-determination of adolescents and was recently validated for students aged 13-22 with and without disabilities across diverse racial/ethnic backgrounds. The SDI: SR is aligned with Causal Agency Theory and its theoretical conceptualizations of self-determined…
Descriptors: Testing, Self Determination, Scores, Students with Disabilities
Fairbairn, Judith; Spiby, Richard – European Journal of Special Needs Education, 2019
Language test developers have a responsibility to ensure that their tests are accessible to test takers of various backgrounds and characteristics and also that they have the opportunity to perform to the best of their ability. This principle is widely recognised by educational and language testing associations in guidelines for the production and…
Descriptors: Testing, Language Tests, Test Construction, Testing Accommodations
Schmitz, Florian; Wilhelm, Oliver – Journal of Intelligence, 2019
Current taxonomies of intelligence comprise two factors of mental speed, clerical speed (Gs), and elementary cognitive speed (Gt). Both originated from different research traditions and are conceptualized as dissociable constructs in current taxonomies. However, previous research suggests that tasks of one category can be transferred into the…
Descriptors: Taxonomy, Intelligence Tests, Testing, Test Format
Ackerman, Debra J. – ETS Research Report Series, 2020
Over the past 8 years, U.S. kindergarten classrooms have been impacted by policies mandating or recommending the administration of a specific kindergarten entry assessment (KEA) in the initial months of school, as well as by the increasing reliance on digital technology in the form of mobile apps, touchscreen devices, and online data platforms. Using…
Descriptors: Kindergarten, School Readiness, Computer Assisted Testing, Preschool Teachers
Magno, Carlo – UNESCO Bangkok, 2020
The COVID-19 pandemic has disrupted education across the globe leading countries to adapt how they administer and manage high-stakes examinations and large-scale learning assessments. This thematic review describes the measures that countries have taken, in terms of policies and practices, when learning assessments are disrupted by emergencies and…
Descriptors: High Stakes Tests, COVID-19, Pandemics, Cross Cultural Studies
Li, Xuelian – English Language Teaching, 2019
Based on articles by mainland Chinese scholars published in the most influential Chinese and international journals, the present article analyzed language testing research, compared the tendencies of seven categories between 2000-2009 and 2010-2019, and put forward future research directions by referring to international hot…
Descriptors: Language Tests, Testing, Educational History, Futures (of Society)
Sinclair, Andrea; Deatz, Richard; Johnston-Fisher, Jessica; Levinson, Heather; Thacker, Arthur – Partnership for Assessment of Readiness for College and Careers, 2015
The overall purpose of the research studies described in this report was to investigate the quality of the administration of the Partnership for Assessment of Readiness for College and Careers (PARCC) field test during the spring of 2014. These research studies were conducted for the purpose of formative evaluation. Findings from these studies are…
Descriptors: Standardized Tests, College Readiness, Career Readiness, Test Validity
Rios, Joseph A.; Liu, Ou Lydia – American Journal of Distance Education, 2017
Online higher education institutions face the challenge of obtaining valid results when administering student learning outcomes (SLO) assessments remotely. Traditionally, there has been heavy reliance on unproctored Internet test administration (UIT) due to increased flexibility and reduced costs; however, a number of validity…
Descriptors: Online Courses, Testing, Test Wiseness, Academic Achievement
Partnership for Assessment of Readiness for College and Careers, 2014
More than 1 million students in nearly 16,000 schools participated in the spring 2014 Partnership for Assessment of Readiness for College and Careers (PARCC) field test. Fourteen states and the District of Columbia administered the test. The primary purposes of the PARCC field test were to: (1) Examine the quality of test questions and tasks; (2)…
Descriptors: Standardized Tests, College Readiness, Career Readiness, Test Validity