
Imagine having a casual chat with an AI chatbot and walking away with a completely different opinion on a political issue you felt strongly about just ten minutes earlier. It sounds like a science fiction movie, but it's already happening. New research shows that leading AI models are becoming highly effective at persuasion and, in some cases, even more convincing than humans. They are not just sharing facts but tailoring responses to the individual, using tone, evidence, and personalisation in ways that can subtly sway opinions.
According to a report by Financial Express, studies conducted by the UK’s AI Security Institute, in collaboration with universities including Oxford and MIT, found that AI models like OpenAI’s GPT-4, GPT-4.5, GPT-4o, Meta’s Llama 3, xAI’s Grok 3, and Alibaba’s Qwen could influence political views in conversations lasting less than ten minutes. What’s more, the changes in opinion were not fleeting. A significant portion of participants retained their new views even a month later.
The researchers didn't rely on the models' default behaviour alone. They fine-tuned them using thousands of conversations on divisive topics like healthcare funding and asylum policy. Rewarding outputs that matched the desired persuasive style, and adding personalised touches such as referencing the user's age, political leanings, or prior opinions, made the AI even more convincing. In fact, personalisation increased its persuasiveness by about five per cent compared to generic responses.
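To make that idea of "personalisation" concrete, here is a minimal, purely illustrative sketch of how a tailored system prompt might be assembled from a few user attributes before a conversation starts. The UserProfile structure, its field names, and the prompt wording are assumptions for illustration only, not details taken from the study.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Hypothetical attributes; the study describes age, political leaning,
    # and prior opinions as the kind of detail used for personalisation.
    age: int
    political_leaning: str
    prior_opinion: str

def build_persuasive_prompt(profile: UserProfile, topic: str) -> str:
    """Assemble a system prompt that tailors tone and evidence to the user."""
    return (
        f"You are discussing {topic}. The user is {profile.age} years old, "
        f"leans {profile.political_leaning}, and currently believes: "
        f"'{profile.prior_opinion}'. Acknowledge their view, then present "
        "evidence and framing most likely to shift it."
    )

# Example usage with made-up values
profile = UserProfile(age=34, political_leaning="centre-left",
                      prior_opinion="asylum quotas should not change")
print(build_persuasive_prompt(profile, "asylum policy"))
```

The point of the sketch is simply that a few lines of metadata, fed into the prompt, are enough to change how a model frames its argument for a particular person.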
While that may not sound huge, in the context of influencing public opinion, it’s substantial. Political campaigns spend millions chasing even a one per cent swing in voter sentiment. The ability to get that shift in minutes, at scale, is both impressive and alarming. I think this is where the real debate begins; it’s one thing for AI to sell you a new smartphone, but quite another for it to nudge your stance on government policy.
The study also highlighted that AI persuasion isn't limited to politics. Earlier research from MIT and Cornell showed these models could reduce belief in conspiracy theories, climate change denial, and vaccine scepticism by engaging in personalised, evidence-based conversations. While that sounds like a positive use case, it is a reminder that the same skill set could be applied in less ethical ways, such as spreading misinformation or promoting harmful ideologies.