Researchers warn that generative AI models, including ChatGPT, DALL-E, and Midjourney, could distort human beliefs by spreading false and biased information.
Impact of AI on Human Perception
Generative AI models such as ChatGPT, DALL-E, and Midjourney may distort human beliefs by transmitting false information and stereotyped biases, according to researchers Celeste Kidd and Abeba Birhane. In their Perspective article, they examine how research on human psychology can explain why generative AI has such power to distort what people believe.
Overestimation of AI Capabilities
They argue that society's perception of generative AI's capabilities has been greatly exaggerated, fostering a widespread belief that these models surpass human abilities. People are naturally inclined to adopt information more quickly, and with greater confidence, when it comes from sources they perceive as knowledgeable and confident, and generative AI presents itself as exactly such a source.
AI’s Role in Spreading False and Biased Information
Generative AI models can fabricate false and biased information and disseminate it widely and repeatedly, two factors that largely determine how deeply such information becomes entrenched in people's beliefs. Individuals are most susceptible to influence when they are actively seeking information, and they tend to hold firmly to information once they have received it.
Implications for Information Search and Provision
The current design of generative AI is geared largely toward information search and provision. As a result, Kidd and Birhane suggest, it may be especially difficult to change the minds of individuals who have encountered false or biased information through these systems.
Need for Interdisciplinary Studies
The researchers conclude by emphasizing a critical window of opportunity for interdisciplinary studies that evaluate these models, for example by measuring people's beliefs and biases before and after exposure to generative AI. Such work is timely, they argue, because these systems are being rapidly adopted and integrated into everyday technologies.
Reference: “How AI can distort human beliefs: Models can convey biases and false information to users” by Celeste Kidd and Abeba Birhane, 22 June 2023, Science.
DOI: 10.1126/science.adi0248