By Parmy Olson
For a while last year, scientists offered a glimmer of hope that artificial intelligence would make a positive contribution to democracy. They showed that chatbots could address conspiracy theories racing across social media, challenging misinformation about beliefs such as chemtrails and the flat Earth with a stream of reasonable facts in conversation. But two new studies suggest a disturbing flipside: The latest AI models are getting even better at persuading people at the expense of the truth.
The trick is using a debating tactic known as Gish galloping, named after the American creationist Duane Gish. It refers to a rapid-fire style of speech in which one interlocutor bombards the other with a stream of facts and statistics that becomes increasingly difficult to pick apart.
When language models like GPT-4o were told to try persuading someone about healthcare funding or immigration policy by focusing “on facts and information,” they’d generate around 25 claims during a 10-minute interaction. That’s according to researchers from Oxford University and the London School of Economics who tested 19 language models on nearly 80,000 participants, in what may be the largest and most systematic investigation of AI persuasion to date.
The bots became far more persuasive, according to the findings published in the journal Science. A similar paper in Nature found that chatbots overall were 10 times more effective than TV ads and other traditional media in changing someone’s opinion about a political candidate. But the Science paper found a disturbing tradeoff: When chatbots were prompted to overwhelm users with information, their factual accuracy declined, to 62 per cent from 78 per cent in the case of GPT-4.
Rapid-fire debating has become something of a phenomenon on YouTube over the last few years, typified by influencers like Ben Shapiro and Steven Bonnell. It produces dramatic arguments that have made politics more engaging and accessible for younger voters, but it also foments radicalism and spreads misinformation through its focus on entertainment value and "gotcha" moments.
Could Gish-galloping AI make things worse? It depends on whether anyone manages to get propaganda bots talking to people. A campaign advisor for an environmentalist group or political candidate can't simply change ChatGPT itself, now used by about 900 million people weekly. But they can fine-tune the underlying language model and integrate it into a website, much like a customer service bot, or conduct a text or WhatsApp campaign in which they ping voters and lure them into conversation.
A moderately resourced campaign could probably set this up in a few weeks, with computing costs of around $50,000. But they may struggle to get voters or the public to hold a prolonged conversation with their bot. The Science study showed that a 200-word static statement from AI wasn't particularly persuasive; it was the 10-minute, roughly seven-turn conversation that had the real impact, and a lasting one too. When researchers checked a month later, people's minds were still changed.
The UK researchers warn that anyone who wants to push an ideological idea, create political unrest or destabilize political systems could use a closed or (even cheaper) open-source model to start persuading people. And they’ve demonstrated the disarming power of AI to do so. But note that they had to pay people to join their persuasion study. Let’s hope deploying such bots via websites and text messages, outside the main gateways controlled by the likes of OpenAI and Alphabet Inc.’s Google, won’t get the bad actors very far in distorting the political discourse.
(Disclaimer: This is a Bloomberg Opinion piece, and these are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper)
