Researchers find what makes AI chatbots politically persuasive


Nearly two years ago, Sam Altman tweeted that AI systems would be capable of superhuman persuasion before achieving general intelligence – a prediction that raised concerns about AI’s impact on democratic elections.

To see whether conversational large language models can actually influence the public’s political views, scientists from the UK AI Security Institute, MIT, Stanford, Carnegie Mellon and several other institutions conducted the largest study of AI persuasiveness to date, involving nearly 80,000 participants in the UK. It turns out that political AI chatbots fall far short of superhuman persuasiveness, but the study raises subtler questions about our interactions with AI.

AI dystopias

Public debate about the impact of AI on politics has largely revolved around notions drawn from dystopian science fiction. Large language models have access to essentially every fact and story published about any issue or candidate. They have processed information from books on psychology, negotiation and human manipulation. They can draw on enormous computing power in huge data centers around the world. Furthermore, they often have access to a wealth of personal information about individual users thanks to the hundreds of thousands of online interactions available to them.

Talking to a powerful AI system is basically interacting with an intelligence that knows everything about everything, as well as almost everything about you. Viewed this way, LLMs can indeed seem a little scary. The goal of this massive new study was to break such scenarios down into their component pieces and see whether they really hold water.

The team examined 19 LLMs, including three different versions of ChatGPT and some of the most powerful models available, such as xAI’s Grok-3 Beta, as well as a range of smaller, open-source models. The AIs were asked to advocate for or against specific stances on 707 political issues chosen by the team. Advocacy took place in short conversations with paid participants recruited through crowdsourcing platforms. Each participant rated their agreement with a specific stance on a given political issue on a scale of 1 to 100 before and after talking to the AI.
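The outcome measure described above — agreement rated 1 to 100 before and after the conversation — can be sketched in a few lines. This is a hypothetical illustration of how such a persuasion effect might be computed, not code from the study; the function name and the sample data are invented for the example.

```python
def mean_persuasion_shift(ratings):
    """Average pre-to-post change in agreement with the advocated stance.

    ratings: list of (before, after) pairs, each on the study's 1-100
    agreement scale. A positive result means participants moved, on
    average, toward the position the AI argued for.
    """
    if not ratings:
        raise ValueError("need at least one participant")
    return sum(after - before for before, after in ratings) / len(ratings)


# Three illustrative participants: two shifted toward the stance,
# one shifted slightly away from it.
sample = [(40, 52), (55, 60), (70, 66)]
print(round(mean_persuasion_shift(sample), 2))  # → 4.33
```

A real analysis would also need a control group (participants who read a static message or had no conversation) to separate the chatbot's effect from rating drift, which is the kind of comparison a study of this size is designed to support.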



