Business Standard

China adapts Meta's AI model for military use amid US security concerns

The research signals that military analysts within China's People's Liberation Army (PLA) are exploring open-source LLMs, especially those from Meta, as potential tools for defence-related applications


Abhijeet Kumar, New Delhi

Researchers in China have reportedly adapted Meta's open-source large language model (LLM), Llama 2 13B, customising it with unique parameters to create an AI tool designed to support intelligence operations and enhance military decision-making capabilities.
 
The model, which the researchers call ChatBIT, is said to excel at dialogue and question-answering tasks relevant to military scenarios, according to their published findings. It reportedly outperformed some other AI systems, achieving nearly 90 per cent of the capability of OpenAI's ChatGPT-4. However, the authors did not specify their performance metrics or indicate whether ChatBIT has been deployed, according to a Reuters report.

China exploring military applications of Meta's Llama 2

The research suggests that military analysts within China’s People’s Liberation Army (PLA) are exploring open-source LLMs, particularly those from Meta, as potential tools for defence-related applications.
 
 
Meta, which takes an open-release approach for many of its AI systems, including Llama, enforces certain use restrictions. For instance, entities whose products serve more than 700 million monthly active users must secure a licence from Meta, and the terms explicitly prohibit using the models for military, nuclear, espionage, or other sensitive activities.
 
However, the company acknowledges that, given the open nature of its models, it has limited means to ensure compliance. Meta's director of public policy, Molly Montgomery, stated, "Any use of our models by the People's Liberation Army is unauthorised and contrary to our acceptable use policy."

China’s AI strategy amid international security concerns

This development comes amid intense debate in US technology and security circles over the accessibility of open-source AI. The US government recently announced measures to restrict American investment in Chinese sectors, including AI, that could pose national security risks. A Pentagon statement echoed this approach, with spokesperson John Supple noting that "open-source models have both advantages and disadvantages" and adding that the Department of Defense would closely monitor global AI developments.
 
William Hannas, a lead analyst at Georgetown University's Center for Security and Emerging Technology, was quoted in the report as saying that efforts to keep Chinese scientists from accessing Western AI advancements are unlikely to be fully effective.

China’s experiments with AI models

Earlier this year, in May, China launched a chatbot trained on "Xi Jinping Thought," a political philosophy that underscores the country's focus on socialism with Chinese characteristics. Developed by China's Cyberspace Academy, the AI model reflects the content and principles of Xi's doctrine, which was enshrined in China's constitution in 2018, covering political, social, and economic dimensions. According to an announcement from the Cyberspace Administration of China (CAC), six of the seven databases used to train the chatbot were sourced from CAC's own repositories, mostly related to information technology.
 
This LLM is distinct from other AI systems in its reliance on a specially curated, domestic knowledge base and its closed-source design, which China claims ensures "security and reliability". A WeChat post from CAC's magazine stated that the chatbot can provide answers, generate reports, summarise information, and even translate between Chinese and English, all within the ideological framework of "Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era".
 
Unlike many open-source AI models, this chatbot operates exclusively on the servers of the China Cyberspace Research Institute, with all data processing confined to these localised systems. Demonstrations indicate that its responses are based strictly on sanctioned Chinese documents and official sources, reinforcing its alignment with state-approved information. The AI model remains in internal testing and is accessible only to selected users by invitation.
 
(With inputs from Reuters)

First Published: Nov 01 2024 | 2:31 PM IST
