When chatbots were tested for their political inclination, most of them displayed a left-of-centre stance, a study has found.
However, when the chatbots, including ChatGPT and Gemini, were tested after being "taught" a particular political inclination -- left, right or centre -- they produced responses aligned with that "training," or "fine-tuning," found David Rozado, a researcher at Otago Polytechnic, New Zealand.
This shows that chatbots can be "steered" towards desired locations on the political spectrum, using modest amounts of politically aligned data, the author said in the study published in the journal PLoS ONE.
Chatbots are AI-based large language models (LLMs), which are trained on massive amounts of textual data and are, therefore, capable of responding to requests framed in natural language (prompts).
Multiple studies have analysed the political orientation of chatbots available in the public domain and found them to occupy varied locations on the political spectrum.
In this study, Rozado looked at the potential to "teach" as well as reduce political bias in these conversational LLMs.
The author administered political orientation tests, such as the Political Compass Test and Eysenck's Political Test, to 24 different open- and closed-source chatbots.
Along with ChatGPT and Gemini, Anthropic's Claude, xAI's Grok and Meta's Llama 2, among others, were also tested.
He found that most of these chatbots generated "left-of-centre" responses, as adjudged by the majority of the political tests.
Further, using published text, Rozado also induced a political bias in GPT-3.5 through fine-tuning, a machine learning technique used to adapt LLMs to specific tasks.
Thus, a "LeftWingGPT" was created by training the model on snippets of text from publications such as The Atlantic and The New Yorker, and from books written by authors with similar political persuasions.
Likewise, for creating "RightWingGPT," Rozado used text from publications such as The American Conservative and books by similarly aligned writers.
Finally, "DepolarizingGPT" was created by training GPT-3.5 using content from the Institute for Cultural Evolution, a US-based think tank, and the book Developmental Politics, written by the institute's president, Steve McIntosh.
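The fine-tuning described above can be sketched in code. This is an illustrative reconstruction, not the study's actual pipeline: the snippet texts, the system prompt and the helper name `to_finetune_records` are all invented for the example, which merely shows how politically aligned text excerpts could be packaged into the chat-format JSONL that OpenAI's fine-tuning service expects for GPT-3.5-class models.

```python
import json

# Hypothetical stand-ins for the politically aligned excerpts used in the
# study (e.g. passages from left- or right-leaning publications and books).
snippets = [
    "Example passage expressing one political viewpoint.",
    "Another passage from a similarly aligned publication.",
]

def to_finetune_records(texts, system_prompt):
    """Wrap raw text snippets in the chat-format records used for
    fine-tuning: each record pairs a system prompt and a short user turn
    with the snippet as the desired assistant response."""
    records = []
    for text in texts:
        records.append({
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": "Continue in the same voice."},
                {"role": "assistant", "content": text},
            ]
        })
    return records

records = to_finetune_records(snippets, "You are a political commentator.")

# One JSON object per line, ready to be written out as train.jsonl and
# uploaded to start a fine-tuning job on a GPT-3.5-class base model.
jsonl_lines = [json.dumps(r) for r in records]
print(jsonl_lines[0])
```

Repeating this with three different source corpora, as in the study, would yield three separately fine-tuned models in the spirit of LeftWingGPT, RightWingGPT and DepolarizingGPT.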
"As a result of the political alignment fine-tuning, RightWingGPT has gravitated towards right-leaning regions of the political landscape in the four tests. A (similar) effect is observed for LeftWingGPT.
"DepolarizingGPT is on average closer to political neutrality and away from the poles of the political spectrum," the author wrote.
He clarified, however, that the results were not evidence that the chatbots' inherent political preferences were "deliberately instilled" by the organisations creating them.