Suspicions are turning into numbers. Two studies published in Science and Nature confirm that AI chatbots, much like the ones in everyday use, can shift voting preferences by several points, up to about 15% in controlled scenarios.
Researchers at Cornell University and the UK AI Security Institute tested a deliberately simple setup: a voter, a candidate, and a political chatbot. Participants first rated a candidate, then chatted with an AI chatbot programmed to defend that candidate, and finally rated the candidate again. On the surface, nothing extraordinary: a brief conversation, a few arguments, a revised rating.
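For readers who think in code, here is a minimal sketch, not the researchers' actual materials, of how the effect in such a pre/post protocol would be measured: each participant's rating before and after the conversation, on the 0-to-100 scale the studies report, averaged across participants. All names and numbers below are illustrative.

```python
# A minimal sketch (illustrative, not the studies' code) of measuring
# persuasion in a pre/post design: rate the candidate, talk to the chatbot,
# rate again, then average the shift across participants.

def rating_shift(pre: float, post: float) -> float:
    """One participant's shift, in points on a 0-to-100 candidate rating."""
    return post - pre

def average_shift(ratings: list[tuple[float, float]]) -> float:
    """Mean shift; positive values mean movement toward the bot's candidate."""
    return sum(rating_shift(pre, post) for pre, post in ratings) / len(ratings)

# Hypothetical data: (pre, post) ratings for three participants.
print(average_shift([(40, 52), (55, 58), (30, 41)]))  # ≈ 8.67 points
```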
The results, however, are anything but trivial. In the United States, before the 2024 presidential election, a single exchange of this kind was enough to shift a candidate’s rating by several points, especially when the bot defended the camp opposite the participant’s initial preference.
The same pattern appears in Canada and Poland, with shifts of up to about ten points on a 0-to-100 scale.
Above all, the effect is not symmetrical: a chatbot advocating for a candidate the participant already likes reinforces existing convictions, but one defending the “wrong” camp sometimes manages to crack that resistance. In other words, AI does not just comfort the convinced; it is beginning to undermine opponents’ certainties.
The studies agree on one key point: what persuades most are messages centered on public policy, whether economic measures, taxation, security, or health, rather than personality or storytelling. When the chatbot presents numerical arguments, platform comparisons, and references to facts, real or supposed, the impact on voting intentions is markedly stronger.
But this power comes at a cost. Researchers note a harsh trade-off between persuasion and accuracy: the most convincing models are also the ones that produce the most inaccurate statements.
In several experiments, bots favoring right-wing candidates generated more errors or misleading claims than those aligned with left-wing candidates, revealing an imbalance in what the models “really know.”
Meanwhile, the second study, conducted on 19 AI language models and nearly 77,000 adults in the UK, shows that the key is not so much model size as how the model is steered through prompting. Instructions that push these models to introduce new information significantly increase persuasive power, but once again degrade factual accuracy. More arguments, more impact, less truth.
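To make that distinction concrete, here is a minimal sketch, under stated assumptions, of the two kinds of prompting the study contrasts: a baseline persuader and one explicitly told to keep introducing new information. The prompt wording and the model name are illustrative assumptions, not the study's protocol; only the API call pattern is standard OpenAI usage.

```python
# Illustrative sketch of two prompting conditions: a plain persuader vs. one
# pushed to introduce new information each turn. Prompts and model name are
# assumptions made for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BASELINE = "You are a campaign volunteer. Argue politely in favor of candidate X."
INFO_DENSE = (
    "You are a campaign volunteer. Argue politely in favor of candidate X, "
    "and in every reply introduce new policy details, statistics, and "
    "comparisons the voter has not yet heard."
)

def persuade(system_prompt: str, voter_message: str) -> str:
    """One turn of the dialogue under a given prompting condition."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": voter_message},
        ],
    )
    return response.choices[0].message.content

# Per the study's finding: replies under INFO_DENSE tend to shift ratings
# more, while also containing more claims that fail fact-checking.
```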
In this context, the rise of AI is no longer limited to political chatbots. Tether has just bet 70 million euros on Generative Bionics to accelerate the development of humanoid AIs, a sign that these systems, virtual or embodied, are expected to interact with the public ever more widely and to influence opinion at scale.