Peer reviewed
ERIC Number: EJ1475713
Record Type: Journal
Publication Date: 2025-Aug
Pages: 41
Abstractor: As Provided
ISBN: N/A
ISSN: ISSN-0049-1241
EISSN: EISSN-1552-8294
Available Date: N/A
Machine Bias. How Do Generative Language Models Answer Opinion Polls?
Sociological Methods & Research, v54 n3 p1156-1196 2025
Generative artificial intelligence (AI) is increasingly presented as a potential substitute for humans, including as research subjects. However, there is no scientific consensus on how closely these in silico clones can emulate survey respondents. While some defend the use of these "synthetic users," others point toward social biases in the responses provided by large language models (LLMs). In this article, we demonstrate that these critics are right to be wary of using generative AI to emulate respondents, but probably not for the right reasons. Our results show (i) that to date, models cannot replace research subjects for opinion or attitudinal research; (ii) that they display a strong bias and a low variance on each topic; and (iii) that this bias randomly varies from one topic to the next. We label this pattern "machine bias," a concept we define, and whose consequences for LLM-based research we further explore.
SAGE Publications. 2455 Teller Road, Thousand Oaks, CA 91320. Tel: 800-818-7243; Tel: 805-499-9774; Fax: 800-583-2665; e-mail: journals@sagepub.com; Web site: https://sagepub.com
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: (1) CERAPS, Faculté des sciences juridiques politiques et sociales, Université de Lille, France; (2) CREST, ENSAE, Institut Polytechnique de Paris, Paris, France; (3) CREST, École Polytechnique, Institut Polytechnique de Paris, Paris, France; (4) CREST, CNRS, Institut Polytechnique de Paris, Paris, France