The views expressed by contributors are their own and not the view of The Hill

The West needs to focus on semiconductor trust, right now



In May of 2021, IBM announced that it had created the world’s first 2-nanometer chip. This is a breathtaking accomplishment. We now know how to create semiconductor devices whose structural features are measured in a few baker’s dozens of atoms; 25 years ago, when the United States and Europe dominated chip manufacturing, the industry produced components at the 250-nanometer node, more than 100 times larger. Because transistor density scales roughly as the inverse square of the feature size, today’s devices have about 10,000 times more processing power.
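
To make that arithmetic explicit, here is a back-of-the-envelope sketch in Python. It assumes density scales with the inverse square of feature size, which is a simplification, since modern node names no longer map cleanly onto physical dimensions:

```python
# Back-of-the-envelope node scaling. Assumes transistor density grows
# as the inverse square of the minimum feature size -- a simplification,
# since modern "node" names are partly marketing labels.
old_node_nm = 250  # mid-1990s process node
new_node_nm = 2    # IBM's May 2021 announcement

linear_shrink = old_node_nm / new_node_nm   # 125x smaller features
density_gain = linear_shrink ** 2           # ~15,625x more devices per area

print(f"Feature shrink: {linear_shrink:.0f}x")
print(f"Density gain:   {density_gain:,.0f}x (order of 10,000)")
```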

The United States still holds a significant edge in the software tools needed for semiconductor design, and many of the manufacturing innovations in materials science and photolithography come from our laboratories; however, practically all advanced electronics are physically fabricated in Asia. The 2-nanometer breakthrough will certainly migrate there as well, and far more quickly than our R&D, commercial, and trade policies currently anticipate. 

Asia’s emergence in device manufacturing traces back to 1994, when then-Defense Secretary William Perry mandated that the military services use “commercial specifications and standards instead of military standards.” At the time and in historical context, this was a natural, even celebrated, decision. The United States had little apprehension of resurgent — never mind revanchist — Russian influence. Even more importantly, China’s economy was less than a tenth of ours, and its technology base was dependent upon absorption, adoption, and occasional theft from the West.

Despite deep cultural differences and almost no intellectual property protection, the labor-and-facilities arbitrage made China a particularly attractive host for new factories. Its excellent educational standards sealed the deal for many high-tech investments, including capital-intensive semiconductor manufacturing facilities. Now, in the waning days of the COVID-19 pandemic, we better understand that offshoring our semiconductor supply chain created problems that seriously threaten our national security, economic liberty, and even personal freedoms.

1994 was the year before Netscape went public. There was no general awareness — never mind fear — of cybersecurity threats. Fifteen years later, most chip companies had outsourced their most-advanced manufacturing to Asia. This is directly analogous to how book publishers compete on author reputations and customer awareness but contract out the actual printing to content-agnostic printers who compete on paper, energy, ink, and low labor costs.

In other words, we conceive the chips at home and send the circuit schematics to foundries overseas.

The physical devices are tested and inspected by the “publishers,” but the focus is on compliance with the functional specifications. Until recently, no one seriously considered the possibility that the factory itself could, or would, change the product. Now we know that factories can, and sometimes do.

China’s response to U.S.-instigated tariff wars and international trade restrictions was to raise the priority of digital self-sufficiency and so-called “third-generation” chip development. As Nicholas Christakis wrote, “Before the pandemic, the emphasis was on just-in-time production, with parts being delivered just when they were needed in the manufacturing process … But in the post-pandemic period, the emphasis could shift to just-in-case supply chains.” China is investing in capacity that we are not.

For semiconductors, any contingency plan must focus on reliability and trusted sources. While efforts to repatriate physical plants are part of the solution, they require far more coordination than we have, and will take years to complete. According to the White House’s supply chain report from June of last year, “In 2019, of six new semiconductor production facilities in the world, none were in the United States, while four were in China.” It correctly captures the acute problem, which is that “[The] United States produces none of the leading-edge (under 10-nm) chips, while Taiwan accounts for 92 percent” — making Taiwan, the most essential node in the semiconductor supply chain, yet another geopolitical flashpoint.

As C. Raja Mohan recently pointed out, finding the right balance between ideological confrontation, commercial competition, and scientific cooperation will not be easy. Because of the industry’s complexity, deeply interwoven and interdependent flows, and billion-dollar facility costs, this balance will be especially hard to strike for semiconductors. Making matters worse — as Jared Cohen and Richard Fontaine masterfully describe — is the West’s dangerous disunion, while China has become ascendant in facial and voice recognition, 5G technology, digital payments, quantum communications, and the commercial drone market, all cutting-edge technologies irreplaceably enabled by semiconductors. In their words, “the United States and its allies have stepped away from their tradition of collaboration.”

The consequence, according to the Semiconductor Industry Association, is that Asia will “capture nearly all manufacturing growth” this decade, while the U.S. — on its current trajectory — will gain almost none. China is investing $100 billion into the sector and “funding the construction of more than sixty new semiconductor fabs” in order to have “the single-largest share of chip production by 2030.” Recently, with bipartisan support, the Senate passed the U.S. Innovation and Competition Act of 2021, which includes $52 billion over five years to “bolster domestic semiconductor manufacturing.”

How, then — without adding even more tension to our relationship with China — can we collectively respond to this lethal threat in a way that both directly addresses the challenge and better promotes the health, resilience, and vitality of this core industry?

Virology suggests a promising approach.

Earlier generations of semiconductors contained no ability to detect — never mind heal — anomalous function. Basically, we knew something was wrong when the device did not work. The root causes of error were assumed to be design flaws. That was a safe assumption at the time: it is mathematically impossible to test every combination of behaviors, and the design paradigms of the last quarter century never seriously considered that an adversary would deliberately contaminate or infect a device.

Chips that are clandestinely modified with “Trojans” can leak information (e.g., targeting coordinates), behave improperly (e.g., turn themselves off or follow inauthentic remote instructions), or become unreliable (e.g., erroneously compute their GPS-based position). Even though we cannot anticipate every conceivable misbehavior, industry is starting to consider how automated tools can identify vulnerabilities and threat surfaces that human designers would miss. We can model where hardware Trojans might go on a chip, what they might be able to do, and, crucially, how we would know.
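
As a concrete illustration, one class of design-time analysis looks for signals that almost never switch, since stealthy Trojan triggers are built to stay dormant. Here is a minimal sketch in Python; the netlist, signal names, toggle probabilities, and threshold are all hypothetical:

```python
# Toy sketch: flag rarely-toggling signals as candidate Trojan trigger
# sites. Real rare-node analyses work on actual gate-level netlists;
# every name and number here is illustrative.
import random

random.seed(0)

def simulate_toggle_counts(signals, n_cycles=10_000):
    """Stand-in for logic simulation: count how often each signal toggles."""
    return {
        name: sum(random.random() < p for _ in range(n_cycles))
        for name, p in signals.items()
    }

# Hypothetical signals and their per-cycle toggle probabilities.
# A stealthy trigger is designed to switch almost never.
signals = {
    "alu_carry": 0.40,
    "bus_valid": 0.25,
    "trojan_trigger": 0.0002,  # nearly dormant -> suspicious
}

counts = simulate_toggle_counts(signals)
THRESHOLD = 10  # toggles per 10,000 cycles; tuned per design
suspects = [name for name, c in counts.items() if c < THRESHOLD]
print("Candidate trigger sites:", suspects)
```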

The current approach to hardware cybersecurity largely falls in the realm of forensic analysis — that is, the search for and mitigation of exploits after the compromised hardware is delivered from the manufacturer, distributor, or other links in the supply chain. In contrast, one could apply predictive analysis, much like the T-cells in our bodies. Progress here is born of recent advances in machine learning and artificial intelligence, and of our already-deep understanding of the downstream impact of unexpected changes. Knowledge of these attack modes can inform new algorithms (that assert “don’t ever let this happen” rules or anticipate “what an adversary might want to do”) to expose and potentially remediate hardware security weaknesses during the design phase.
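
To make the “don’t ever let this happen” idea concrete, here is a minimal rule-checking sketch over a toy netlist. In practice, such rules would be formal assertions inside the design tools themselves; every signal name and rule below is hypothetical:

```python
# Minimal sketch of design-phase "don't ever let this happen" rules.
# The toy netlist maps each signal to the signals that drive it;
# all names and rules are hypothetical.
netlist = {
    "reset":      ["power_on", "hidden_counter"],  # suspicious driver
    "key_out":    ["aes_core"],
    "debug_port": ["key_out"],                     # suspicious sink
}

RULES = [
    ("reset must not be drivable by an internal counter",
     lambda sig, drivers: sig == "reset"
        and any("counter" in d for d in drivers)),
    ("key material must never reach a debug port",
     lambda sig, drivers: sig == "debug_port"
        and any("key" in d for d in drivers)),
]

for signal, drivers in netlist.items():
    for description, violated in RULES:
        if violated(signal, drivers):
            print(f"RULE VIOLATION: {description} ({signal} <- {drivers})")
```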

These new strategies depend on our ability to identify vulnerable points in the chip before it is built, insert our own Trojans, and observe the response. We can experiment with specialized surveillance instruments and bit-level anomaly detection. These immune system-like explorations closely mimic real biological defenses and can even evolve better chip-level protections. The difference is that nature has had 4 billion years to experiment with alternative defenses and error-correction techniques; even at the smallest technology nodes available today, there is practically nothing inside a chip that protects it.
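
A bit-level comparison against a trusted “golden” model is the simplest form of this kind of surveillance. The sketch below is illustrative only; the device behavior and the hidden trigger are invented for the example:

```python
# Illustrative bit-level anomaly detection: compare a trusted "golden"
# reference against a device under test across all 8-bit inputs.
# The Trojan behavior below is invented for the example.

def golden_model(x: int) -> int:
    """Trusted reference (e.g., pre-fabrication simulation)."""
    return (x * 3) & 0xFF

def device_under_test(x: int) -> int:
    """Stand-in for the fabricated chip; flips a bit on one rare input."""
    y = (x * 3) & 0xFF
    return y ^ 0x01 if x == 0xA5 else y  # hidden trigger at x == 0xA5

for x in range(256):
    diff = golden_model(x) ^ device_under_test(x)
    if diff:
        print(f"input 0x{x:02X}: outputs differ in bit mask 0b{diff:08b}")
```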

The primary problem is that the economic incentives are misaligned, even perverse. Manufacturers have historically loathed the insertion of “overhead” in their devices, for fear of being commercially uncompetitive. Every square nanometer is devoted to enhanced functionality. Semiconductor immune systems take time to integrate in the lab, require space on the device, and consume power in the field. When an alternative supplier can provide all the (apparently) same features faster, in a smaller and more energy-efficient package, reliability and trust are unceremoniously defenestrated.

Moreover, official recognition of cyber-physical systems as a national security threat largely ignores the more pernicious vulnerability we face in semiconductor manufacturing. Although software breaches are reported with horrifying regularity, the general public and elected officials are largely unaware of the challenge that transcends ransomware and data center hacks. The new threat — bad actors inserting Trojans directly into chips that can be activated to cause malfunction — is especially dangerous because we depend on semiconductors for both national security and the everyday services of civil society. And the only way to fix a compromised chip is to replace it, which is infeasible if the device is implanted in a body and impossible if it is launched into space.

Since a huge fraction of our supply will come from Asia for at least the next decade, we need to take chip security far more seriously.

Given the inadequacy of U.S.-based manufacturing capacity, we believe that the administration must level the competitive playing field, especially in the national security domain, and demand that suppliers provide the user-controlled ability to detect intrusion on every device and electronic system. Congress can, simultaneously, ensure that the country makes an appropriate level of investment — at least in universities and government laboratories — to maintain our edge in the semiconductor design software that enables this level of “immune response.”

The good news is that there is bio-mimicking technology under development that can significantly reduce or even eliminate this new and unprecedented vulnerability. Indeed, the country urgently needs to re-prioritize this “problem from hell” before we experience a large-scale attack on our infrastructure and security apparatus.

Joseph Costello was the founding CEO of Cadence Design Systems and is a laureate of the Phil Kaufman Award.

Peter L. Levin is an Adjunct Senior Fellow in the Technology and National Security program at the Center for a New American Security and CEO of Amida Technology Solutions, Inc.
