
Anatomy of a Technology Blockade: Unpacking the Outbound Investment Order

Photo: Michael M. Santiago/Getty Images

Commentary by Barath Harithas

Published July 15, 2024

Introduction

On June 21, 2024, the U.S. Department of the Treasury issued a Notice of Proposed Rulemaking (NPRM) to implement President Biden's Executive Order 14105 (Outbound Investment Order). The NPRM builds on the Advance Notice of Proposed Rulemaking (ANPRM) released in August 2023. The Treasury Department is accepting public comments until August 4, 2024, and plans to release the finalized regulations later this year.

The rules seek to address what the U.S. government sees as a loophole in the trade and investment restriction tool kit. While U.S. companies have been barred from selling sensitive technologies directly to China under Department of Commerce export controls, U.S. investors have been free to continue investing in Chinese firms developing the same technologies. U.S. capital may thus be inadvertently fueling Beijing's indigenization drive.

The proposed rules aim to restrict outbound U.S. investments into Chinese companies developing the troika of "force-multiplier" technologies: (1) semiconductors and microelectronics, (2) artificial intelligence (AI), and (3) quantum information technologies.

Broadly, the outbound investment screening mechanism (OISM) is an effort scoped to target transactions that enhance the military, intelligence, surveillance, or cyber-enabled capabilities of China. U.S. investments will be either: (1) prohibited or (2) notifiable, based on whether they pose an acute national security risk or may contribute to a national security threat to the United States, respectively.

However, the criteria separating an "acute" national security risk from a lesser national security threat are somewhat elastic. In places, the NPRM is narrowly targeted, prohibiting investments in AI systems or quantum technologies explicitly designed for military, intelligence, cyber, or mass-surveillance end uses, restrictions that are commensurate with demonstrable national security concerns. At the same time, the NPRM introduces broad catch-all clauses under each covered category that effectively proscribe investments in entire classes of technology, including the development of quantum computers, AI models above certain technical parameters, and advanced packaging techniques (APT) for semiconductors. This suggests that the OISM's remit extends beyond immediate national security applications to include any avenue that might allow Chinese technological leapfrogging.

Tripwires and Thresholds: A Deep Dive into Strategic Sectors

Semiconductors & Microelectronics

The NPRM largely aligns with existing export controls, aside from the addition of APT, and prohibits U.S. investments to develop, produce, design, or fabricate the following:

  1. electronic design automation (EDA) software;
  2. semiconductor manufacturing equipment;
  3. extreme ultraviolet lithography (EUV) machines;
  4. integrated circuits (logic/memory/DRAM) that meet or exceed performance parameters under existing export controls;
  5. supercomputers exceeding 100 petaflops (64-bit) or 200 petaflops (32-bit) of processing power within a 41,600 cubic foot envelope; and
  6. APT.

The prohibition of APT under the OISM marks a shift in the U.S. approach to maintaining a competitive edge over China in the semiconductor industry. Instead of focusing only on individual chip performance gains through continuous node advancement, such as from 7 nanometers (nm) to 5 nm to 3 nm, the United States has started to recognize the importance of the system-level performance gains afforded by APT. Current semiconductor export controls, with their restrictions on high-performance chips, EDA tools, and EUV lithography machines, reflect the older thinking: they largely fixate on obstructing China's access to, and capacity to produce, chips at the most advanced nodes. That approach rests on the long-standing assumption that the primary driver of improved chip performance is making transistors smaller and packing more of them onto a single chip.

However, with the slowing of Moore's Law, which predicted the doubling of transistors every two years, and as transistor scaling (i.e., miniaturization) approaches fundamental physical limits, this approach may yield diminishing returns and may not be sufficient to maintain a significant lead over China in the long term.

APT helps overcome the constraints of traditional transistor scaling. It facilitates system-level performance gains through the heterogeneous integration of different chip functionalities (e.g., logic, memory, and analog) in a single, compact package, either side by side (2.5D integration) or stacked vertically (3D integration).

As a result of the increased proximity between components and the greater density of connections within a given footprint, APT unlocks a series of cascading benefits. The reduced distance between components means that electrical signals travel a shorter distance (i.e., shorter interconnects), while the higher functional density enables increased bandwidth between chips thanks to the greater number of parallel communication channels available per unit area. Together, these enable faster data transfer rates: there are now more data "highway lanes," and they are also shorter. The advantages extend beyond speed. Shorter interconnects are less susceptible to signal degradation, reducing latency and increasing overall reliability. Crucially, APT also improves power efficiency, since there is less resistance and capacitance to overcome.
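To make the intuition concrete, the toy calculation below compares a conventional board-level link with a 2.5D/3D package along the two dimensions discussed above: aggregate bandwidth (parallel channels times per-channel rate) and signal propagation delay (proportional to interconnect length). All of the numbers are hypothetical placeholders chosen for illustration, not measured values for any real product.

```python
# Toy illustration of the APT argument above. Every figure here is a
# hypothetical placeholder, not a measured value for any real package.

SIGNAL_SPEED_M_PER_S = 1.5e8  # assumed effective signal propagation speed


def aggregate_bandwidth_gbps(channels: int, per_channel_gbps: float) -> float:
    """Aggregate chip-to-chip bandwidth: parallel channels x per-channel rate."""
    return channels * per_channel_gbps


def propagation_delay_ns(interconnect_mm: float) -> float:
    """One-way signal propagation delay over an interconnect of a given length."""
    return (interconnect_mm / 1000) / SIGNAL_SPEED_M_PER_S * 1e9


# Conventional packaging: fewer links between chips over longer board traces.
board_bw = aggregate_bandwidth_gbps(channels=64, per_channel_gbps=16)
board_delay = propagation_delay_ns(interconnect_mm=50)

# 2.5D/3D advanced packaging: many more links over much shorter distances.
apt_bw = aggregate_bandwidth_gbps(channels=1024, per_channel_gbps=16)
apt_delay = propagation_delay_ns(interconnect_mm=2)

print(f"Board-level link : {board_bw:8.0f} Gb/s aggregate, {board_delay:.2f} ns delay")
print(f"Advanced package : {apt_bw:8.0f} Gb/s aggregate, {apt_delay:.2f} ns delay")
```

The point is not the specific figures but the scaling: more parallel channels per unit area multiply aggregate bandwidth, while shorter interconnects cut propagation delay and the associated signal-integrity and power losses.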

These features are increasingly important in the context of training large frontier AI models. The largest current large language models (LLMs) have more than 1 trillion parameters, requiring computation to be spread across tens of thousands of high-performance chips inside a data center. According to unverified but commonly cited leaks, the training of GPT-4 required roughly 25,000 Nvidia A100 GPUs running for 90-100 days. Efficient training of such models demands high-bandwidth communication, low latency, and rapid data transfer between chips for both forward passes (propagating activations) and backward passes (backpropagating gradients). These are precisely the issues that APT overcomes or mitigates. The increased power efficiency afforded by APT is also particularly important in the context of the mounting energy costs of training and running LLMs.
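As a rough sanity check on these orders of magnitude, the sketch below converts the unverified hardware figures cited above into an estimate of total training compute. The A100 peak throughput is a published specification, but the 35 percent utilization rate and the 95-day midpoint are assumptions made purely for illustration.

```python
# Back-of-the-envelope estimate of total training compute from the unverified
# figures cited above (roughly 25,000 Nvidia A100 GPUs for 90-100 days).
# Utilization and run length are illustrative assumptions, not reported values.

GPUS = 25_000
DAYS = 95                      # midpoint of the reported 90-100 day range
PEAK_FLOPS_PER_GPU = 312e12    # A100 peak dense BF16 throughput (~312 TFLOPS)
UTILIZATION = 0.35             # assumed fraction of peak throughput actually achieved

seconds = DAYS * 24 * 3600
total_flop = GPUS * PEAK_FLOPS_PER_GPU * UTILIZATION * seconds
print(f"Estimated training compute: {total_flop:.1e} FLOP")  # roughly 2 x 10^25 FLOP
```

Under these assumptions the run lands in the low 10^25 FLOP range, which is exactly the territory that the NPRM's compute thresholds, discussed below, are designed to capture.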

Importantly, APT could potentially allow China to technologically leapfrog the United States in AI. By focusing on APT innovation and data-center architecture improvements that increase parallelization and throughput, Chinese companies could compensate for the lower individual performance of older chips and achieve aggregate training-run performance comparable to that of U.S. AI labs.

In addition to the prohibitions above, the OISM also requires notifications for

  1. design of any integrated circuit not prohibited;
  2. fabrication of any integrated circuit not prohibited; and
  3. packaging of any integrated circuit not prohibited.

The U.S. government is seeking greater visibility on a range of semiconductor-related investments, albeit retroactively within 30 days, as part of its information-gathering exercise.

AI Systems

The AI systems provisions are the most open-ended section of the NPRM. They narrowly target problematic end uses while also containing broad clauses that could sweep in multiple advanced Chinese consumer AI models. The NPRM prohibits the development of AI systems that are designed for restricted end uses or that exceed certain technical parameters, including:

  1. military, government intelligence, or mass surveillance end uses;
  2. systems trained using computing power greater than 10^24, 10^25, or 10^26 FLOP (floating-point operations); and
  3. systems trained using 10^23 or 10^24 FLOP using primarily biological sequence data.

The OISM also proposes notifications for AI systems that are:

  1. trained using computing power greater than 10^23, 10^24, or 10^25 FLOP.

For the uninitiated, FLOP (floating-point operations) measures the total amount of computation (i.e., compute) used to train an AI system. It serves as a proxy for the capabilities of AI systems, as advances in AI since 2012 have closely correlated with increases in compute.

As a reference marker, models trained with roughly 10^23, 10^24, and 10^25 FLOP correspond approximately to the scale of GPT-3, GPT-3.5, and GPT-4, respectively. Treasury is still deciding among the compute alternatives and will likely set the threshold for notifiable transactions one step below the threshold for the corresponding prohibited transactions (i.e., if 10^24 FLOP is selected for prohibited investments, notifiable investments will be set at 10^23 FLOP).
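The decision logic can be summarized in a short sketch. The thresholds below are placeholders: 10^25 FLOP for prohibited and 10^24 FLOP for notifiable transactions are assumed for illustration only, since Treasury has not yet selected among the alternatives, and the function name and structure are illustrative rather than language from the NPRM.

```python
# Illustrative sketch of the NPRM's compute tripwires for covered AI systems.
# The operative thresholds are still undecided; 1e25 (prohibited) and 1e24
# (notifiable) are assumed here purely for illustration.

PROHIBITED_FLOP = 1e25       # NPRM alternatives: 1e24, 1e25, or 1e26
NOTIFIABLE_FLOP = 1e24       # expected to sit one step below the prohibited level
PROHIBITED_BIO_FLOP = 1e24   # NPRM alternatives for biological sequence data: 1e23 or 1e24


def classify_ai_transaction(training_flop: float,
                            restricted_end_use: bool = False,
                            bio_sequence_data: bool = False) -> str:
    """Map a covered AI transaction to 'prohibited', 'notifiable', or 'not covered'."""
    if restricted_end_use:  # military, intelligence, cyber, or mass-surveillance end uses
        return "prohibited"
    if bio_sequence_data and training_flop >= PROHIBITED_BIO_FLOP:
        return "prohibited"
    if training_flop >= PROHIBITED_FLOP:
        return "prohibited"
    if training_flop >= NOTIFIABLE_FLOP:
        return "notifiable"
    return "not covered"


print(classify_ai_transaction(2e25))                           # prohibited
print(classify_ai_transaction(3e24))                           # notifiable
print(classify_ai_transaction(5e22, restricted_end_use=True))  # prohibited regardless of compute
```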

In terms of the Chinese landscape, according to Epoch AI, as of April 2024 there were 5 Chinese AI models above 10^24 FLOP (Qwen-72B by Alibaba, XVERSE-65B by XVERSE Technology, ChatGLM3 by Zhipu AI, Tigerbot-70B by Tigerobo, and ERNIE 3.0 Titan by Baidu) and 14 Chinese AI models above 10^23 FLOP (Qwen-7B by Alibaba, Qwen-14B by Alibaba, Yuan 1.0 by Inspur, GLM-130B by Tsinghua University, xTrimoPGLM-100B by Tsinghua University, BlueLM-13B by vivo AI Lab, Nanbeige-16B by Nanbeige LLM Lab, Skywork-13B by Kunlun Inc., Baichuan2-13B by Baichuan, CodeFuse-13B by Ant Group, DeepSeek Coder 33B by DeepSeek, DeepSeek LLM 67B by DeepSeek, PanGu-Σ by Huawei Noah's Ark Lab, and Yi-34B by 01.AI).

The reason the United States has included general-purpose frontier AI models under the "prohibited" category is likely because they can be "fine-tuned" at low cost to carry out malicious or subversive activities, such as creating autonomous weapons or unknown malware variants. Fine-tuning refers to the process of taking a pre-trained AI model, which has already learned generalizable patterns and representations from a larger dataset, and further training it on a smaller, more specific dataset to adapt the model for a particular task. Similarly, the use of biological sequence data could enable the production of biological weapons or provide actionable instructions for how to do so.

The use of compute benchmarks, however, especially in the context of national security risks, is somewhat arbitrary. Unlike nuclear weapons, for example, AI does not have a comparable "enrichment" metric that marks a transition to weaponization. In addition, the compute used to train a model does not necessarily reflect its potential for malicious use. Smaller, specialized models trained on high-quality data can outperform larger, general-purpose models on specific tasks. For example, the landmark experiment that has become the poster child for how AI can manufacture novel pathogens used a model built on a public database in 2020 that would fall well under the 10^23 threshold. Furthermore, different types of AI-enabled threats have different computational requirements. AI-enabled cyberattacks, for example, might be effectively conducted with just modestly capable models.

Moreover, compute benchmarks that define the state of the art are a moving target. In 2020, only 11 models exceeded 10^23 FLOP. As of 2024, that number has grown to 81. And as advances in hardware drive down costs and algorithmic progress increases compute efficiency, smaller models will increasingly gain access to what are now considered dangerous capabilities.

Lastly, there are potential workarounds for determined adversarial agents. They can "chain" together multiple smaller models, each trained below the compute threshold, to create a system with capabilities comparable to a large frontier model or simply "fine-tune" an existing and freely available advanced open-source model from GitHub.

Quantum Information Technology

Unlike for semiconductors, microelectronics, and AI systems, there are no notifiable transactions for quantum information technologies; the NPRM prohibits outright U.S. investments to develop or produce:

  1. quantum sensing platforms;
  2. quantum networks or quantum communication systems; and
  3. quantum computers or critical components required to produce a quantum computer.

The first two categories contain end-use provisions targeting military, intelligence, or mass surveillance applications, with the second (quantum networks and communication systems) specifically targeting the use of quantum technologies for encryption breaking and quantum key distribution.

These prohibitions aim at obvious and direct national security concerns. Unlike other quantum technology subcategories, the potential defense applications of quantum sensors are relatively clear and achievable in the near to mid-term. According to a report by the Institute for Defense Analyses, within the next five years, China could leverage quantum sensors to enhance its counter-stealth, counter-submarine, image detection, and position, navigation, and timing capabilities. Quantum computing also threatens to break current encryption standards, posing warranted cybersecurity risks.

The NPRM also prohibits U.S. investments to develop or produce quantum computers and their components in China entirely. The rules reflect a judgment that, while significant technical challenges remain given the early state of the technology, there is a window of opportunity to restrict Chinese access to critical developments in the field. This contrasts with semiconductor export controls, which were implemented after significant technological diffusion had already occurred and China had developed native industry strengths. By acting preemptively, the United States aims to maintain a technological advantage in quantum from the outset.

The "Radar Effect": The OISM's Information-Gathering Potential

The notifications required under the OISM will call for companies to submit detailed information about their investments in China, yielding a dynamic, high-resolution snapshot of the Chinese investment landscape. This data will be fed back to the U.S. government, providing visibility into aggregate and sectoral trends and enabling a bidirectional feedback loop in which export controls and investment screening can be fine-tuned or strengthened based on identified gaps or deficits. In addition, by triangulating various notifications, the system could identify "stealth" technological developments in China that have slipped under the radar and serve as a tripwire for potentially problematic Chinese transactions into the United States under the Committee on Foreign Investment in the United States (CFIUS), which screens inbound investments for national security risks.

The OISM goes beyond existing rules in several ways. It not only fills a policy gap but sets up a data flywheel that could introduce complementary effects with adjacent tools, such as export controls and inbound investment screening.

Looking Ahead

The effectiveness of the proposed OISM hinges on a number of assumptions: (1) that the withdrawal of U.S. capital will be damaging to the Chinese technological landscape, and (2) that U.S. technology scaling know-how and tacit knowledge, which have to date been bundled together with capital, are not replicable.

Data from the Rhodium Group shows that U.S. venture capital investment in China has already fallen from a peak of $14.4 billion in 2018 to $1.3 billion in 2022. More work also needs to be done to estimate the level of expected backfilling by Chinese domestic and non-U.S. foreign investors. Moreover, while the United States has historically held a significant advantage in scaling technology companies globally, Chinese companies have made significant strides over the past decade. China may well have enough industry veterans and accumulated know-how to coach and mentor the next wave of Chinese champions.

The United States will also need to secure allied buy-in. Encouragingly, the United States has already started to socialize outbound investment screening at the G7 and is also exploring the inclusion of an "excepted states" clause similar to the one under CFIUS.

Barath Harithas is a senior fellow in the Project on Trade and Technology at the Center for Strategic and International Studies in Washington, D.C.

Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2024 by the Center for Strategic and International Studies. All rights reserved.
