Global threats like the coronavirus pandemic are transforming the world today. An existential truth has emerged: technological advances are outstripping political capacity and imagination. This is not a new story.
• In 2020, unfamiliar technological and social conditions teeter upon ossified political structures in a moment eerily similar to the early years of the 20th century.
• In the 19th century, railroads reshaped national economies, industries, and cultures — with worldwide consequences. In Europe, rapid technological changes were embraced as indicators of progress and celebrated in tribute to the greater glory of the states themselves.
• Today, world leaders are hard pressed to comprehend the complex networks of social and technological forces that undergird the foundations of modern life. The misalignment between our ability to govern and the breakneck pace of social and technological change is growing at an alarming rate.
• The increased complexity and interconnectedness around dual-use technologies — those that can be used for both socially beneficial and military purposes — increase the risk of inadvertent military confrontation. The lights are off and the barriers to entry are not forbiddingly high.
On an unremarkable day in January just over one hundred years ago, the age of empire in Europe came to an end. The colossal states that ruled over vast, multiethnic territories with supreme self-confidence suddenly ceased to exist. Empire’s end arrived with a bang, not a whimper, to be sure. Though the Treaty of Versailles that came into effect in early 1920 redrew the map of Europe, the great monarchs sealed their own fate when they ambled unwittingly into the fires of the Great War. Their demise demonstrates the cost of miscalculation when the pace and scale of technological and social change outstrip political capacity and imagination. Once begun, the war proceeded according to a brutal logic of bloody and unexpected escalation, culminating in the destruction of the very states that had presided over the rise of modern Europe. As we reflect upon the war a century later, we may be surprised to find that the similarities between our time and that not-so-distant past are more troubling than the differences.
Over the course of the 19th century, scientific and technological progress advanced at such a pace that the governing bodies could scarcely grasp the enormity of the transformation of the very ground beneath their feet. They were lulled to complacence by their own seeming immutability. Changes within their realms were embraced as indications of progress and celebrated in tribute to the greater glory of the states themselves. Writing of the replacement of gas streetlamps with electric lighting, the novel rapidity of horseless carriages, and the newfound ability to soar aloft like Icarus, the Viennese writer Stefan Zweig recounts how “faith in an uninterrupted and irresistible ‘progress’ truly had the force of a religion for that generation. One began to believe more in this ‘progress’ than in the Bible, and its gospel appeared ultimate because of the daily new wonders of science and technology.”
Technological progress in turn-of-the-century Europe may strike modern readers as quaint and innocuous. Today, after all, leading firms compete to achieve quantum supremacy in computing, political leaders darkly intone that mastery of artificial intelligence will lead to global domination, and Silicon Valley billionaires look to the stars — investing immense capital in the production of satellites and spaceships to mine the mineral wealth of asteroids.
Just as in Zweig’s Vienna, however, today’s world leaders are hard-pressed to comprehend the complex networks of social and technological forces that undergird the foundations of modern life. High over our heads, alongside the fixed satellite relays that provide instant face-to-face communication with anyone, anywhere, are concealed satellites that states rely upon to receive and transmit critical information to submarines, perform surveillance and reconnaissance, and provide early warning of missile launches. Satellites are an example of a “dual-use” technology: that is, a technology that can be used for both socially beneficial and military purposes. In this sense, they are not dissimilar from railroads in the 19th century.
Safe at Home: Sitting on the front stoop of his house, an American soldier models his gas mask, ca. 1919. First used in World War I by the Germans at the Second Battle of Ypres in 1915, chlorine gas proved an effective means of targeting enemy trenches from afar. Following the deadly Ypres attack, the London Daily Mail condemned the “cold-blooded deployment of every device of modern science,” thundering, “Devilry, Thy Name Is Germany!” Within months, Britain would attack German trenches with gas at the Battle of Loos. (Credit: Kirn Vintage Stock/Corbis via Getty Images)
Railroads spiderwebbed across the European continent in the 1800s, and in the process reshaped national economies, industries, and cultures. Their very ubiquity became a key component of German military planning — strategic surprise leading to quick victory — in the years leading up to World War I. By mobilizing and rapidly deploying thousands of troops via the railroad, imperial German strategists believed that they could deliver a knockout blow to France before turning to engage the Russian Empire on their eastern flank. Today, some scholars suggest that an overreliance on satellite and communications technology presents a similar temptation for military planners: the alluring appeal of the first strike, of a sudden and overwhelming surprise attack. Consider, for example, the confusion that would result from an unexpected strike that disabled the early-warning military satellites used to detect the launch of nuclear missiles.
If history is any guide, we should take warning. When the German surprise attack on France was rebuffed on the banks of the Marne River, the deployment of modern machine guns — whose use was largely unaccounted for in 19th-century German strategy — necessitated the digging of trenches to protect troops from devastating fire. Frustration with the stalemate of trench warfare led generals to seek advantage by modern means. Chlorine gas, newly synthesized and manufactured thanks to breakthroughs in the chemical sciences, proved an effective means of targeting enemy trenches from afar. Suddenly, what was to have been a very quick engagement became an epochal rupture.
James Acton, codirector of the Nuclear Policy Program at the Carnegie Endowment for International Peace, calls the risk that the increased complexity and interconnectedness around dual-use technologies could tip a military confrontation into nuclear escalation a problem of entanglement. Acton writes:
In a conventional conflict, if U.S. defenses were effective in intercepting Russian non-nuclear missiles fired against targets in Europe, Russia might attack U.S. early-warning satellites to blunt these defenses.
However, because such an attack would also degrade the United States’ ability to detect incoming nuclear strikes, Washington could interpret it as the prelude to a Russian nuclear attack — potentially resulting in escalation.
What differentiates today’s risk from that of a century ago is that entanglement may be inadvertent. The Imperial German Army of 1914 intended to use the relatively modern technology of railroads to launch a surprise attack. The attack failed due to miscalculation, resulting in a grim and unforeseen sequence of cascading escalations that culminated in some 40 million casualties and the demise of the imperial grandeur that had occupied the European imagination for centuries. Today, such a series of events could be set in motion without the first shot being knowingly fired.
That is because, unlike railroads and train cars, there is more to satellites than meets the eye. Satellites are the physical face of a novel digital realm made up of myriad interrelations, connections, and dependencies that are well-nigh impossible to trace. While a satellite orbiting hundreds or thousands of miles above our heads can be physically disabled, for instance by a missile or a spacecraft (a scenario some strategists worry about), it can also be remotely hacked, monitored, disabled, or taken over from the same keyboard that can be used to attack a kitchen toaster, an electric car, a city power grid, or a polling booth. Furthermore, satellites invariably depend upon networks of other systems to receive and process the signals they send, and those systems bring with them their own risks and vulnerabilities. In other words, satellites, like office computers, airplanes, elevators, and hospital ventilators, are only as secure as the systems they depend upon. If a determined nonstate group targeted a power supply or a telecommunications network, it could unintentionally — or intentionally — blind an early-warning satellite and thereby precipitate a nuclear crisis between states.
It gets worse. Not only are cyberweapons invisible to the naked eye, but their very efficacy lies in their concealment: once an adversary becomes aware of a cyberweapon’s existence, a suitable defense can be quickly engineered and the weapon effectively neutralized. Unlike in previous paradigms of warfare, the absolute emphasis on protecting the secrecy of cyber operations makes it extraordinarily difficult for competing states to develop confidence-building measures or safeguards against inadvertent escalation.
Nuclear arms control, for example, depends upon the willing disclosure of military assets in order to function effectively, enhancing mutual understanding of each party’s capabilities and intentions. The Open Skies Treaty, currently in jeopardy of falling victim to mistrust, allows states to conduct regular surveillance flights over adversarial territory to observe troop movements and weapons arsenals for themselves. It was precisely this ability to inspect the activity of treaty partners that ushered in an age of arms control and cautious good will, informed by Ronald Reagan’s pithy formula: “Trust, but verify.”
In cyberwar as it is currently waged, there can be neither trust nor veracity. Rules of the road are figured out on the fly, in combat, in the dark. To operate in this mercurial arena, the United States has adopted a policy of “persistent engagement.” Achieve and Maintain Cyberspace Superiority, the April 2018 “roadmap” for U.S. Cyber Command (USCYBERCOM), describes cyberspace as a “fluid environment of constant contact and shifting terrain,” wherein the “constant innovation of disruptive technologies offers all actors new opportunities for exploitation.” It states that “the United States must increase resiliency, defend forward as close as possible to the origin of adversary activity, and persistently contest malicious cyberspace actors to generate continuous tactical, operational, and strategic advantage.”
Picture a cat’s cradle strung with thermonuclear trip wires and threaded between the fingers of a number of rivals, each of whom actively seeks to undermine and attack the others. The lights are off and the barriers to entry are not forbiddingly high. Any party with sufficient programming expertise and computing capacity can enter the arena and pick up a thread. Apart from its piquancy, the image suggests a deeper level of uncertainty below the technical. Beyond the tangle of trip wires, the complexity and risk of the predicament are compounded by the variety of psychologies at play. Quite apart from the question of which string could lead to which effect, there is no understanding of how individual players might interpret any specific action.
In a 2016 report that sought to find common ground between the United States and Russia with regard to cybersecurity, Harvard’s Working Group on the Future of U.S.-Russia Relations began by noting that the two rivals do not even use the same terminology to describe the threat: “Russia emphasizes ‘international information security,’ whereas the United States believes that cybercrime, cyberespionage, and cyberterrorism are the main threats in this domain and so prefers the term ‘cybersecurity’ and a focus on the protection of computer networks and resources.” The prescient report went on to highlight a troubling concern: rising consternation in the Kremlin that its dependence on a global system of interconnected computer networks administered from outside its borders posed a threat to its sovereignty. Russia had begun to seek methods to protect itself, including decoupling from the Internet altogether. Four years later, just such a decoupling appears to be coming to pass.
While some challenges can be addressed with technocratic solutions, others are rooted in pathologies more nebulous and difficult to parse. According to the late Cambridge historian C. A. Bayly, it is the latter that powers the centrifuge of history. While discussing the “motors of change” in the 19th and 20th centuries, Bayly identified war as a principal driver, but argued that as a frame of analysis, its purchase was limited. Where, after all, does war come from? Surveying the 20th century, he observed that while warfare both fueled and was fueled by the demand for economic growth and expansion, the direction of conflict itself was provided by national and extranational identities. “Cecil Rhodes’s career in southern Africa, or the project of building the Berlin–Baghdad or Trans-Siberian railways, were ultimately directed by states or political actors attempting to ensure [not only] their wealth, but also their identity.” In the thaw of the Cold War, the twin energies of globalization and the rise of the Internet compressed time and space, bringing the pressures of wealth acquisition and identity to a head as never before. Today, the example par excellence of Bayly’s insight may be found in the global struggle over Huawei, the Chinese government–backed telecommunications company.
Inside Huawei, China’s Tech Giant: A thermal engineer performs a heat test in the research and development area of Huawei’s Bantian campus, Shenzhen, China, as captured in a photo-essay published in U.S. News & World Report (April 12, 2019). “While commercially successful and a dominant player in 5G, or fifth-generation networking technology,” U.S. News writes, “Huawei has faced political headwinds and allegations that its equipment includes so-called backdoors that the U.S. government perceives as a national security threat.” (Credit: Kevin Frayer/Getty Images)
The determination with which the United States has sought to deter its allies from purchasing Huawei’s communications infrastructure speaks to its recognition that the contours of commerce and social engagement in the 21st century will be determined by the computer code that routes them. In the succinct formulation of Harvard’s Lawrence Lessig, “code is law.”* In the coming decades, as more and more physical commodities and social processes come online, that code and those networks will become a broadening tributary channeling an ever-increasing share of human activity: shoes, refrigerators, thermostats, but also Internet browsing and chat functions, archival access, and — not least — telemedicine, logistics planning, taxation, energy, and voting. To handle the sheer increase in web traffic resulting from such a boom, we will require network and communication services with greatly increased capacity. As of 2020, due to underinvestment, there is no credible Western alternative to Huawei, whose rise and adoption across broad swaths of Asia and Africa, and now Europe, has been subsidized as a national priority project of the People’s Republic of China.
As the enormous transformations of the late 19th and early 20th centuries disrupted social norms and generated novel political demands, declining landowning and military elites proved unable to adapt to the changing circumstances. Dismayed by an emerging world in which their stature was not guaranteed, the ancien régime — ranging from German junkers and Russian nobles to British and French aristocrats — sought in vain to manage popular social movements with nationalistic rhetoric and, ultimately, conscription. We should heed that failure of political imagination to conceive of, or keep up with, the massive changes underway. The misalignment between our ability to govern and the breakneck pace of social and technological change grows at an alarming rate. We agitate about immigration, as if a wall could keep out a pandemic. We lavish ever greater fortunes upon our militaries, even as the U.S. military remains one of the greatest carbon emitters on the planet. We undermine and revoke stabilizing international treaties, as reality dissolves into quanta before our eyes. Entanglements multiply with the inexorable progress of technological and scientific innovation. Machine learning, lethal autonomous drone swarms, artificial intelligence, and quantum computing crowd a dark horizon. A besieged climate will continue to spark conflagrations and catalyze social, economic, and political unrest. Unfamiliar technological and social conditions teeter upon ossified political structures in a moment eerily similar to the early years of the 20th century. The time has come to pinch ourselves and ask if we are dreaming. Were a misstep to wake us, we might long for the days of horseless carriages, flying machines, and the “dim street lights of former times.”
*For the website Above the Law (August 12, 2019), Olga V. Mack provided some context for this famous dictum: “[W]hen Lessig first used the phrase, he didn’t have in mind its contemporary usage. Lessig doesn’t argue that if software code permits an action, it is necessarily allowed. And he definitely doesn’t argue that software will replace law.” Rather, Mack explains, “when he wrote that ‘code is law,’ Lessig was arguing that the Internet should incorporate constitutional principles. Lessig astutely observed early on that the software that underlies the very architecture and infrastructure of the Internet governs it as a whole. But who decides what the rules of code are? Who are the architects behind these code-based structures? There is an obvious and troublesome lack of transparency.”