Tandberg, David A.; Martin, Rebecca A. – State Higher Education Executive Officers, 2019
Higher education is facing a host of challenges, including external questions regarding its value and purpose. These questions cut to the core of the states' role in higher education. Traditionally, states have the responsibility to ensure that institutions of higher education are operating in the public interest and that the institutions are good…
Descriptors: Quality Assurance, Educational Quality, Educational Improvement, Higher Education
Prasad, Joshua J.; Showler, Morgan B.; Schmitt, Neal; Ryan, Ann Marie; Nye, Christopher D. – International Journal of Testing, 2017
The present research compares the operation of situational judgement and biodata measures between Chinese and U.S. respondents. We describe the development and past research on both measures, followed by hypothesized differences across the two groups of respondents. We base hypotheses on the nature of the Chinese and U.S. educational systems and…
Descriptors: Measures (Individuals), Hypothesis Testing, Cross Cultural Studies, Comparative Analysis
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue-writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models