How Chatbots Influence Political Opinions Despite Accuracy Issues
Highlights
A recent large-scale study by the UK’s AI Security Institute reveals that chatbots can significantly sway political opinions, but often at the expense of accuracy.
The research involved nearly 80,000 British participants interacting with 19 AI models, including advanced systems such as ChatGPT and Elon Musk’s Grok.
Participants discussed politically charged topics such as public sector pay, strikes, and the cost of living crisis.
The study found that AI responses dense with information and facts were the most persuasive in changing users’ views.
However, these information-rich responses tended to contain substantial inaccuracies.
This trade-off between persuasiveness and truthfulness raises concerns about the potential negative impact on public discourse and the broader information ecosystem.
The study also found that post-training, in which models are fine-tuned after their initial development, significantly enhances an AI’s persuasive power by optimizing outputs for convincingness.
Interestingly, feeding chatbots personal information about users had less effect on persuasion than increasing the amount of factual content or applying post-training.
The researchers noted that AI’s ability to generate large volumes of information almost instantaneously could make it more manipulative than even the most skilled human persuaders.
Despite these findings, the researchers cautioned that real-world constraints, such as users’ unwillingness to engage in lengthy conversations and psychological limits on how far opinions can shift, may blunt these effects in practice.
The research underscores the need for careful consideration of AI’s role in shaping political opinions, especially given the risk of spreading misinformation while attempting to persuade.