Artificial Intelligence, Strategic Stability and Nuclear Risk


SIPRI Policy Paper

ARTIFICIAL INTELLIGENCE, STRATEGIC STABILITY AND NUCLEAR RISK

Vincent Boulanin, Lora Saalman, Petr Topychkanov, Fei Su and Moa Peldán Carlsson

June 2020

STOCKHOLM INTERNATIONAL PEACE RESEARCH INSTITUTE

SIPRI is an independent international institute dedicated to research into conflict, armaments, arms control and disarmament. Established in 1966, SIPRI provides data, analysis and recommendations, based on open sources, to policymakers, researchers, media and the interested public. The Governing Board is not responsible for the views expressed in the publications of the Institute.

GOVERNING BOARD
Ambassador Jan Eliasson, Chair (Sweden)
Dr Vladimir Baranovsky (Russia)
Espen Barth Eide (Norway)
Jean-Marie Guéhenno (France)
Dr Radha Kumar (India)
Ambassador Ramtane Lamamra (Algeria)
Dr Patricia Lewis (Ireland/United Kingdom)
Dr Jessica Tuchman Mathews (United States)

DIRECTOR
Dan Smith (United Kingdom)

Signalistgatan 9
SE-169 72 Solna, Sweden
Telephone: +46 8 655 9700
Email: sipri@sipri.org
Internet: www.sipri.org


Contents

Preface
Acknowledgements
Abbreviations
Executive Summary

1. Introduction
Box 1.1. Key definitions

2. Understanding the AI renaissance and its impact on nuclear weapons and related systems
I. Understanding the AI renaissance
II. AI and nuclear weapon systems: Past, present and future
Box 2.1. Automatic, automated, autonomous: The relationship between automation, autonomy and machine learning
Box 2.2. Historical cases of false alarms in early warning systems
Box 2.3. Dead Hand and Perimetr
Figure 2.1. A brief history of artificial intelligence
Figure 2.2. The benefits of machine learning
Figure 2.3. Approaches to the definition and categorization of autonomous systems
Figure 2.4. Benefits of autonomy
Figure 2.5. Foreseeable applications of AI in nuclear deterrence

3. AI and the military modernization plans of nuclear-armed states
I. The United States
II. Russia
III. The United Kingdom
IV. France
V. China
VI. India
VII. Pakistan
VIII. North Korea
Box 3.1. The artificial intelligence race
Figure 3.1. Recent policy developments related to artificial intelligence in the United States
Figure 3.2. Recent policy developments related to artificial intelligence in Russia
Figure 3.3. Recent policy developments related to artificial intelligence in the United Kingdom
Figure 3.4. Recent policy developments related to artificial intelligence in France
Figure 3.5. Recent policy developments related to artificial intelligence in China

Figure 3.6. Recent policy developments related to artificial intelligence in India
Figure 3.7. Recent policy developments related to artificial intelligence in Pakistan
Table 3.1. Applications of artificial intelligence of interest to the US Department of Defense
Table 3.2. State of adoption of artificial intelligence in the United States nuclear deterrence architecture
Table 3.3. State of adoption of artificial intelligence in the Russian nuclear deterrence architecture
Table 3.4. State of adoption of artificial intelligence in the British nuclear deterrence architecture
Table 3.5. State of adoption of artificial intelligence in the French nuclear deterrence architecture
Table 3.6. State of adoption of artificial intelligence in the Chinese nuclear deterrence architecture
Table 3.7. State of adoption of artificial intelligence in the Indian nuclear deterrence architecture
Table 3.8. State of adoption of artificial intelligence in the Pakistani nuclear deterrence architecture
Table 3.9. North Korean universities conducting research and studies in artificial intelligence
Table 3.10. State of adoption of artificial intelligence in the North Korean nuclear deterrence architecture

4. The positive and negative impacts of AI on strategic stability and nuclear risk
I. The impact on strategic stability and strategic relations
II. The impact on the likelihood of nuclear conflict: Foreseeable risk scenarios
Box 4.1. Machine learning and verification of nuclear arms control and disarmament: Opportunities and challenges

5. Mitigating the negative impacts of AI on strategic stability and nuclear risk
I. Mitigating risks: What, how and where
II. Possible technical and organizational measures for risk reduction
III. Possible policy measures for risk reduction
Figure 5.1. Risks and challenges posed by the use of artificial intelligence in nuclear weapons

6. Conclusions
I. Key findings
II. Recommendations
Figure 6.1. Possible risk reduction measures and how they can be implemented
Figure 6.2. Four key measures to deal with the negative impact of AI on strategic stability and nuclear risk

About the authors

Preface

The current period sees the post-cold war global strategic landscape in an extended process of redefinition. This is the result of a number of different trends. Most importantly, the underlying dynamics of world power have been shifting with the economic, political and military rise of China, the reassertion under President Vladimir Putin of a great power role for Russia, and the disenchantment expressed by the current United States administration with the international institutions and arrangements that the USA itself had a big hand in creating. As a result, the China–US rivalry has increasingly supplanted the Russian–US nuclear rivalry as the core binary confrontation of international politics. This pair of dyadic antagonisms is, moreover, supplemented by growing regional nuclear rivalries and strategic triangles in South Asia and the Middle East.

Against this increasingly toxic geopolitical background, the arms control framework created at the end of the cold war has deteriorated. Today, the commitment of the states with the largest nuclear arsenals to pursue stability through arms control and potentially disarmament is in doubt. The impact of coronavirus disease 2019 (COVID-19) is not yet clear but may well be a source of further unsettling developments.

All of this is the volatile backdrop to considering the consequences of new technological developments for armament dynamics. The world is going through a fourth industrial revolution, characterized by rapid advances in artificial intelligence (AI), robotics, quantum technology, nanotechnology, biotechnology and digital fabrication. The question of how these technologies will be used has not yet been answered in full detail. It is beyond dispute, however, that nuclear-armed states will seek to use these technologies for their national security.

The SIPRI project ‘Mapping the impact of machine learning and autonomy on strategic stability’ set out to explore the potential effect of AI exploitation on strategic stability and nuclear risk. The research team has used a region-by-region approach to analyze the impact that the exploitation of AI could have on the global strategic landscape. This report is the final publication of this two-year research project funded by the Carnegie Corporation of New York; it presents the key findings and recommendations of the SIPRI authors derived from their research as well as a series of regional and transregional workshops organized in Europe, East and South Asia and the USA. It follows and complements the trilogy of edited volumes that compile the perspectives of experts from these regions on the topic.

SIPRI commends this study to decision makers in the realms of arms control, defence and foreign affairs, to researchers and students in departments of politics, international relations and computer science, as well as to members of the general public who have a professional and personal interest in the subject.

Dan Smith
Director, SIPRI
Stockholm, June 2020

Acknowledgements

The authors would like to express their sincere gratitude to the Carnegie Corporation of New York for its generous financial support of the project. They are also indebted to all the experts who participated in the workshops and other events that SIPRI organized in Stockholm, Beijing, Colombo, New York, Geneva and Seoul. The content of this report reflects the contributions of this international group of experts.

The authors also wish to thank the external reviewer, Erin Dumbacher, as well as SIPRI colleagues Sibylle Bauer, Mark Bromley, Tytti Erästö, Shannon Kile, Luc van de Goor, Pieter Wezeman and Siemon Wezeman for their comprehensive and constructive feedback. Finally, we would like to acknowledge the invaluable editorial work of David Cruickshank and the SIPRI editorial department.

Responsibility for the views and information presented in this report lies entirely with the authors.

Abbreviations

A2/AD	Anti-access/area-denial
AGI	Artificial general intelligence
AI	Artificial intelligence
ATR	Automatic target recognition
AURA	Autonomous Unmanned Research Aircraft
CAIR	Centre for Artificial Intelligence and Robotics
CBM	Confidence-building measure
CCW	Certain Conventional Weapons Convention
CD	Conference on Disarmament
DARPA	Defense Advanced Research Projects Agency
DBN	Deep belief network
DCDC	Development, Concepts and Doctrine Centre
DGA	Direction générale de l’armement (directorate general of armaments of France)
DIU	Defense Innovation Unit
DOD	Department of Defense
DRDO	Defence Research and Development Organisation
GAN	Generative adversarial network
ICBM	Intercontinental ballistic missile
ICT	Information and communications technology
ISR	Intelligence, surveillance and reconnaissance
IT	Information technology
JAIC	Joint Artificial Intelligence Center
LAWS	Lethal autonomous weapon systems
MAF	Ministry of the Armed Forces
MOCI	Ministry of Commerce and Industry
MOD	Ministry of Defence
NATO	North Atlantic Treaty Organization
NC3	Nuclear command, control and communications
New START	Treaty on Measures for the Further Reduction and Limitation of Strategic Offensive Arms
NFU	No-first-use
NPT	Non-Proliferation Treaty
PIAIC	Presidential Initiative for Artificial Intelligence and Computing
R&D	Research and development
RGB	Reconnaissance General Bureau
SCCSS	Strategic Command and Control Support System
SLBM	Submarine-launched ballistic missile
SSBN	Nuclear-powered ballistic missile submarine
UAV	Unmanned aerial vehicle
UCAV	Unmanned combat aerial vehicle
UN	United Nations
USAF	United States Air Force
USV	Unmanned surface vehicle
UUV	Unmanned underwater vehicle
WMD	Weapon of mass destruction
XLUUV	Extra-large unmanned underwater vehicle

Executive Summary

The world is undergoing a fourth industrial revolution that is characterized by rapid and converging advances in many technological areas. Few of these technologies are expected to have as profound an impact on relations among nuclear-armed states as artificial intelligence (AI). While the field of AI has been around since the 1950s, it has experienced a renaissance since the beginning of the 2010s. The recent advances in AI could have an impact on the field of nuclear weapons and posture, with consequences for strategic stability and nuclear risk reduction. Nuclear-armed states and international organizations must thus consider a spectrum of options to deal with the challenges generated by AI.

The two major technological developments of the current AI renaissance are machine learning and autonomy. Machine learning—at the core of the renaissance—is an approach to software programming that now enables the development of increasingly capable AI applications. Autonomy—a key by-product of the renaissance—refers to the ability of a machine to execute tasks without human input, using interactions of computer programming with the environment. Autonomous systems have been around for a long time, but recent advances in machine learning have made them more sophisticated and useful.

From a technical perspective, it is beyond dispute that the AI renaissance will have an impact on nuclear weapons and postures. Advances in machine learning and autonomy could unlock new and varied possibilities for a wide array of nuclear force-related capabilities, ranging from early warning to command and control and weapon delivery. The key question is therefore not if, but when, how and by whom these recent advances of AI will be adopted in nuclear force architectures. However, these technological developments are still only a few years old and little detailed information is available in official sources about how nuclear-armed states see the role of AI in their nuclear force development or modernization plans.

Nonetheless, there is already clear evidence that all nuclear-armed states have taken notice of the AI renaissance and have made the pursuit of AI a priority. The ability to harness the recent advances in AI is typically presented as an essential enabler of national and military power in the years to come. AI is also systematically presented as a stake in the great power competition, and official sources show that nuclear-armed states are determined to be world leaders in this field. In this context, while it is too early to determine the net effect of recent advances in AI on strategic stability and nuclear risk, some informed speculation is possible.

Given the typical way in which military technology is adopted, the incorporation of AI into nuclear weapon systems is likely to be slow and steady. This development could have both stabilizing and destabilizing effects on strategic stability, depending on the country and the regional context. For example, if a dominant power were to use AI to enhance its nuclear force structure, this could further compromise the deterrence capability of a weaker country, weakening strategic stability. Alternatively, if the weaker power is able to harness AI to improve its own nuclear forces, then it may be able to redress existing asymmetries, thereby enhancing mutual vulnerability and strategic stability. In cases of multilateral nuclear deterrence relations, calculations of AI-driven strategic stability become even more complex.

At the same time, advances in AI can have an impact on strategic stability relations among nuclear-armed states even before they are fully developed, much less deployed. For example, a state may perceive that an adversary’s investment in AI, even non-nuclear-related, could give that adversary the ability to threaten the state’s future second-strike capability. This could be sufficient to generate insecurity and lead that state to adopt measures that could decrease strategic stability and increase the risk of a nuclear conflict.

Throughout these and other scenarios, AI could fail or be misused in ways that could trigger an accidental or inadvertent escalation of a crisis or conflict into a nuclear conflict. However, for these scenarios to become reality, a number of destabilizing dynamics would need to align. In the current geopolitical context, it is hard to imagine how AI technology alone could be the determining trigger for nuclear weapon use. Geopolitical tensions, lack of communication and inadequate signalling of intentions are all variables that would play an equally important if not greater role than AI technology in triggering an escalation of crisis or conflict to the nuclear level.

While it might be hard to predict the exact impact that AI may have, it is not too early to start discussing options that nuclear-armed states and the international security community could explore to prevent and mitigate the risks that military applications of AI, including in nuclear weapon systems, pose to peace and stability. Some solutions already exist. Existing arms control instruments include a number of proven technical, organizational and policy measures that could be discussed and implemented, unilaterally, bilaterally or multilaterally.

However, political pragmatism is required to determine which measures and adoption processes will be adequate, implementable and effective. The main challenge is that the political and institutional conditions required for a constructive discussion among nuclear-armed states on arms control have worsened dramatically in recent years, while the conversation on AI-related risks is still new and speculative.

In this light, states and international organizations should take a number of measures—sequentially or simultaneously—to deal pragmatically with the strategic challenges that AI raises. One measure would be to support awareness-raising measures that will help the relevant stakeholders—governmental practitioners, industry and civil society, among others—gain a realistic sense of the challenges posed by AI in the nuclear arena. Another measure would be to support transparency and confidence-building measures that can help to reduce misperception and misunderstanding among nuclear-armed states on AI-related issues. An additional measure would be to support collaborative resolution of the challenges posed by AI and the exploration of beneficial use of AI for arms control. A final possible measure would be to discuss and agree on concrete limits to the use of AI in nuclear forces.
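[Editorial illustration] The distinction drawn in the summary above between conventional, hand-coded automation and machine learning can be made concrete with a minimal sketch. The example below is not part of the SIPRI report: the toy sensor readings, the fixed threshold and the tiny perceptron are arbitrary assumptions chosen only to show that a learned decision rule is fitted from labelled examples rather than written out by a programmer.

```python
# Purely illustrative sketch: a hand-coded rule versus a rule learned from data.
# The toy data and parameters are arbitrary assumptions, not drawn from the report.
import random

random.seed(0)

# Toy "training data": pairs of two synthetic sensor readings and a label
# (1 = event of interest, 0 = no event). Entirely illustrative.
samples = []
for _ in range(200):
    x1, x2 = random.random(), random.random()
    samples.append(((x1, x2), 1 if x1 + x2 > 1.0 else 0))

# 1. Conventional automation: the decision rule is written by a human.
def hand_coded_rule(x1, x2, threshold=1.0):
    """Fixed rule chosen by a programmer; it never changes with experience."""
    return 1 if x1 + x2 > threshold else 0

# 2. Machine learning: a simple perceptron fits its weights to the examples.
def train_perceptron(data, epochs=20, lr=0.1):
    """Learn two weights and a bias from labelled examples (perceptron rule)."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # -1, 0 or +1
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

w1, w2, b = train_perceptron(samples)

def learned_rule(x1, x2):
    """Decision rule whose parameters were fitted from data, not hand-written."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Both rules classify a new reading; only the second was derived from examples.
print(hand_coded_rule(0.7, 0.6), learned_rule(0.7, 0.6))
```

Nothing beyond the Python standard library is used, so the sketch runs as written. The contrast is the one the summary draws: in the learned case the system's behaviour is a product of its training data rather than of an explicitly written rule.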

1. Introduction

The nuclear order that was inherited from the cold war is under great stress as its political, institutional, geopolitical and technological foundations are being called into question in unprecedented ways.

The political will of the Russian Federation and the United States—the states with the world’s largest nuclear arsenals—to pursue stability through arms control and disarmament seems to have diminished. In line with its 2018 Nuclear Posture Review, in early 2020 the USA announced the deployment of submarine-launched ballistic missiles (SLBMs) with low-yield nuclear warheads.1 Russia is investing in new strategic weapons, reportedly including the Poseidon, a nuclear-capable, nuclear-powered unmanned underwater vehicle (UUV), and the Burevestnik, a nuclear-powered long-range cruise missile.2

Accompanying these shifts in doctrine and armaments, the arms control framework created by the Soviet Union and the USA during the cold war is disintegrating. In August 2019 the USA withdrew from the 1987 Treaty on the Elimination of Intermediate-Range and Shorter-Range Missiles (INF Treaty) following years of deadlock over alleged Russian non-compliance.3 The 2010 Russian–US Strategic Arms Reduction Treaty (New START) will also expire in 2021 unless both parties agree to extend or replace it.4 However, neither the USA nor Russia has officially begun to negotiate deeper nuclear arms reductions beyond the levels of New START.

The global strategic landscape has also changed with an expansion in the number of declared nuclear-armed states over the past quarter of a century to include India, Pakistan and the Democratic People’s Republic of Korea (DPRK, or North Korea).5 The nuclear order is increasingly multipolar.6 The East versus West nuclear binary that characterized the cold war has now been complicated by regional nuclear rivalries and even strategic triangles, such as that between China, Russia and the USA.7 Moreover, China, India and Pakistan are continuing to

1 US Department of Defense (DOD), Nuclear Posture Review (DOD: Washington, DC, Feb. 2018), pp. 54–55; and US Department of Defense, ‘Statement on the fielding of the W76-2 low-yield submarine-launched ballistic missile warhead’, 4 Feb. 2020.
2 TASS, ‘Key stage of Poseidon underwater drone trials completed, says Putin’, 2 Feb. 2019; ‘Russia begins testing of “Poseidon” underwater nuclear drone’, PressTV, 26 Dec. 2018; and Ramm, A., [Winged ‘Burevestnik’: What is known about Russia’s secret weapon], Izvestia, 5 Mar. 2019 (in Russian). See also Hwang, I. and Kim, J., ‘The environmental impact of nuclear-powered autonomous weapons’, ed. L. Saalman, The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, vol. II, East Asian Perspectives (SIPRI: Stockholm, Oct. 2019), pp. 86–90.
5 Sood, R. (ed.), Nuclear Order in the Twenty-First Century (Observer Research Foundation: New Delhi, 2019).
6 Legvold, R., ‘The challenges of a multipol
