Your Digital Self: Another U.S.-China conflict is escalating. This one could be the most serious of all.


There’s a growing need for international agreements and norms governing the use of AI in military applications.

In an era where technological prowess increasingly dictates global power, it’s no wonder that both the U.S. and China are striving to perfect the use of artificial intelligence (AI) in military applications.

China’s push to integrate AI into its military strategy highlights its ambition to challenge the United States’ long-standing military superiority in the Indo-Pacific region. Under President Xi Jinping’s guidance, China is striving to create a defense strategy for the new era.

At the 20th CPC (Communist Party of China) National Congress, Xi underlined Beijing’s commitment to AI development and intelligent warfare (the term refers to military systems enhanced by AI), signifying a major shift in China’s approach to national security and military operations.

In 2015, China implemented substantial military reforms, partly to shift the focus of the People’s Liberation Army (PLA) from land-based territorial defense to extended power projection. A key element of these reforms was the creation of the Strategic Support Force (SSF), which centralized missions across broader domains and was tasked with integrating strategic functions such as space, cyberspace, information and psychological warfare.

The SSF plays a pivotal role in China’s AI military strategy. It has been responsible for a significant share of AI-related equipment contracts within the PLA and has invested heavily in AI innovation across multiple domains, including intelligence, surveillance and reconnaissance, autonomous vehicles, information and electronic warfare, and target recognition. The investment is part of a broader effort to leverage public-private partnerships for new technologies.

The domains in which the SSF operates, such as space, cyber, and information operations, are particularly well-suited for prioritized integration of AI-enabled applications due to the high data volumes associated with these areas. The SSF’s functions, which involve processing diverse, high-volume, and rapidly changing streams of information, are excellent candidates for the application of AI. This includes creating and maintaining situational awareness, analyzing the consequences of potential courses of action, as well as planning and wargaming.


In terms of space operations, China is exploring AI uses in managing large constellations of satellites and analyzing earth observation datasets. This includes processing satellite imagery for target identification, developing cognitive radio systems for more efficient space-based communications, operating satellites autonomously, and reducing the workload of ground satellite operators.

China’s strategic approach, facilitated by the SSF, aligns with the broader civil-military fusion initiative. This initiative aims to overcome barriers that have previously restricted the PLA’s ability to leverage advancements from China’s high-tech commercial and academic sectors. Such collaborations are essential for China’s ambition to establish global AI leadership and apply this technology effectively in military missions, reflecting a strategic effort to counterbalance U.S. military superiority in the Indo-Pacific region.

The United States, recognizing the strategic importance of AI in modern warfare, has also been actively developing its AI capabilities, particularly in military applications, to maintain its global power status.

The U.S. Department of Defense (DoD) released its AI adoption strategy in 2023, focusing on accelerating the adoption of advanced AI capabilities. The strategy emphasizes maintaining decision superiority on the battlefield, building on years of DoD leadership in AI development. It aims to solidify the U.S.’s competitive advantage in deploying this emerging technology, as highlighted by Deputy Defense Secretary Kathleen Hicks.

As part of a broader effort to recruit and retain a highly skilled workforce, the DoD’s new cyber workforce strategy, set to cover 2023-2027, introduces dozens of updated work roles, including new AI and data-focused specializations.

One key initiative is “Replicator,” a project aimed at deploying thousands of AI-enabled autonomous vehicles by 2026. “Replicator” seeks to galvanize U.S. military innovation using platforms that are small, smart, cheap and numerous.


AI-assisted tools are also being employed in space, a crucial domain in military competition. The U.S. Space Force uses an operational prototype called “Machina” to autonomously monitor over 40,000 objects in space. The system orchestrates data collection with a global telescope network, utilizing algorithms, computer vision and large language models.

The Air Force leverages AI’s predictive powers for aircraft maintenance, anticipating needs for a fleet of more than 2,600 aircraft. AI models identify potential failures hours in advance. Additionally, a health-related pilot project tracks the fitness of more than 13,000 soldiers in the Army’s Third Infantry Division, using predictive modeling to reduce injuries and enhance performance.
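The military models themselves aren’t public, but the basic mechanics of predictive maintenance are easy to illustrate: train a classifier on historical sensor readings labeled with subsequent failures, then flag assets whose predicted failure risk crosses a threshold. Below is a minimal Python sketch on synthetic data; every feature name, threshold and model choice here is an assumption made for illustration, not a description of the Air Force’s actual system.

```python
# Purely illustrative sketch of predictive maintenance on synthetic data.
# The feature names, failure rule and model choice below are assumptions
# for demonstration only and do not describe any actual Air Force system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical sensor/usage features for n aircraft observations.
X = np.column_stack([
    rng.uniform(0, 1_000, n),   # flight hours since last overhaul
    rng.normal(1.0, 0.3, n),    # vibration level (arbitrary units)
    rng.normal(80, 10, n),      # oil temperature (deg C)
])

# Toy ground truth: failure risk grows with wear, vibration and heat.
risk = 0.001 * X[:, 0] + 0.5 * X[:, 1] + 0.01 * X[:, 2]
y = (risk + rng.normal(0, 0.2, n) > 1.9).astype(int)   # 1 = failure soon

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Flag aircraft whose predicted failure probability exceeds a maintenance threshold.
probs = model.predict_proba(X_test)[:, 1]
print(f"{(probs > 0.7).sum()} of {len(probs)} aircraft flagged for preventive maintenance")
```

In practice, the hard part tends to be the data pipeline, assembling clean, labeled maintenance histories from thousands of airframes, rather than the model itself.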

The Pentagon is developing intertwined battle networks, known as Joint All-Domain Command and Control. This initiative automates the processing of various data types across the armed services, aiming to rapidly connect combatants. It is part of a broader effort in “human-machine teaming,” employing unmanned vehicles and other autonomous technologies.

The Pentagon’s 2023 Data, Analytics, and Artificial Intelligence Adoption Strategy envisions AI as a cornerstone for maintaining decision superiority, enhancing situational awareness, and improving the speed and accuracy of military decisions. The document encourages investing in interoperable infrastructure, advancing the AI ecosystem, and improving data management. This holistic approach demonstrates a commitment to overcoming challenges and leveraging AI’s transformative potential while adhering to ethical standards in military applications.


While these initiatives showcase the U.S.’s commitment to integrating AI into its military strategy, challenges remain. Despite initiatives like the aforementioned strategy and the CHIPS and Science Act, which aims to bolster domestic chip manufacturing, semiconductor imports increased by 13% in the U.S. in 2023, underscoring a considerable dependency on foreign sources for these essential components. Taiwan and South Korea, through Taiwan Semiconductor Manufacturing (TSM) and Samsung Electronics (005930), respectively, remain key semiconductor sources.

AI systems require advanced computing capabilities, which are heavily dependent on high-quality semiconductors. A disruption in the supply of these components could significantly impact the U.S. ability to maintain and advance its AI capabilities in the military sector. 

With the race for AI dominance reaching a fever pitch, the United States and China have both recognized the need for dialogue, particularly in light of rising tensions over Taiwan, a potential nuclear arms race, and espionage allegations. Both nations have engaged in crucial talks about the future use of AI in sensitive military areas, as highlighted during the Asia Pacific Economic Cooperation (APEC) summit in San Francisco last month. These discussions focus on addressing the risks associated with using advanced AI systems in military functions.

One of the critical areas of concern in these talks is the automation of nuclear command and control (C2) systems with AI. Experts such as Oriana Skylar Mastro of Stanford University have pointed out the need for an agreement to avoid such automation. Human oversight in decision-making and launch procedures for nuclear systems is seen as crucial, yet there is skepticism about reaching an agreement on AI-related nuclear arms control.

Despite these talks, it appears unlikely that either side will accept a ban that limits its freedom to deploy AI in its military. U.S. law, for instance, prevents the president from making such a commitment without congressional approval. China, meanwhile, is more interested in an expanded dialogue on AI than in reaching a specific agreement. This stance reflects China’s broader diplomatic strategy to counterbalance and rival the U.S. in setting future AI standards, especially in the military domain.


As global powers fight for AI supremacy, there’s another danger looming for civilians: the use of military-grade AI applications for continuous and systematic surveillance of civilian targets. China, a champion in the field of trampling human rights, leads the way, and there is a risk that, as cyberwarfare reaches its pinnacle, the U.S. might be tempted to follow.

The potential for AI-enabled surveillance and control is at odds with democratic principles and risks creating a society where technology undermines rather than upholds human rights and freedoms.

But civil liberties aren’t the only point of concern here. AI isn’t perfect, and loss of human control over autonomous systems can have catastrophic consequences in war. AI-enabled weapon systems that make decisions without human intervention are still not 100% reliable, risking unintended escalation of conflicts, friendly fire and civilian casualties. Finally, aside from errors in judgment, AI systems can also be manipulated and sabotaged (e.g., through data poisoning).
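To make that last risk concrete, here is a minimal, purely illustrative Python sketch of label-flipping data poisoning: an adversary who corrupts even a modest fraction of a model’s training labels can measurably degrade its accuracy. The dataset and classifier are toy stand-ins with no connection to any real weapon system.

```python
# Purely illustrative sketch of label-flipping data poisoning on a toy dataset.
# Nothing here models a real military system; it only shows how corrupting a
# fraction of training labels degrades a classifier's accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Train on data where `flip_fraction` of the labels were flipped by an attacker."""
    rng = np.random.default_rng(1)
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), int(flip_fraction * len(y_poisoned)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # the adversary silently flips these labels
    model = LogisticRegression(max_iter=1_000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)      # evaluated on clean test data

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned fraction {frac:.2f} -> test accuracy {accuracy_after_poisoning(frac):.3f}")
```

Real attacks would be far subtler, but the mechanism is the same: corrupt what the system learns from, and its downstream decisions quietly drift.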

All of this gives rise to another problem: the difficulty of attributing responsibility for actions taken by these flawed autonomous systems.

By now it should be obvious: there’s a growing need for international agreements and norms governing the use of AI in military applications. Without these, the development of AI technologies might occur in a moral vacuum, leading to uses that are profoundly at odds with humanitarian principles and international law. This includes concerns over targeted surveillance, privacy infringements, and potential human rights abuses.

It’s good news, then, that the regulation of AI in military applications is gaining increasing attention at the international level, with various initiatives and declarations emerging. The Political Declaration on Responsible Military Use of AI and Autonomy, launched in February 2023 at the Responsible AI in the Military Domain Summit (REAIM 2023) in The Hague, has been endorsed by several countries, including the United States and many European nations. It provides a normative framework addressing the responsible use of AI in the military domain. The declaration is not legally binding and aims to build international consensus around responsible behavior, guiding states in the development, deployment and use of military AI.

The European Parliament has called for an EU legal framework on AI that includes definitions and ethical principles, particularly focusing on its military use. The report adopted by the European Parliament emphasizes that human dignity and human rights must be respected in all EU defense-related activities. It advocates for human oversight in AI-enabled systems and stresses the importance of human control in decision-making processes, especially concerning lethal autonomous weapon systems (LAWS).

The EU is pushing for a strategy to prohibit LAWS and a ban on “killer robots,” reiterating the need for human intervention in military decisions involving AI. The text also calls for the EU to play a leading role in establishing a global framework for the military use of AI, in collaboration with the United Nations and the international community.

The movement towards such regulations, though still in its early stages and primarily consisting of non-binding declarations, represents a crucial step in addressing the ethical and legal challenges posed by the integration of AI in military contexts.

Perhaps, rather than fearing killer machines in war, we should be more concerned with them potentially appearing at our doorsteps in the near future. Military advancements often find their way into domestic use. With AI’s vast capabilities, the risk of such technology being used for widespread surveillance, social control and the suppression of dissent is real and alarming. The challenge lies in ensuring that these powerful tools are governed by robust ethical frameworks and legal safeguards to prevent their abuse, both domestically and internationally.
