The Militarization of Artificial Intelligence

August 2019
United Nations, New York, NY

Foreword

We are pleased to present to you this workshop summary and the associated discussion papers on The Militarization of Artificial Intelligence.

In his agenda for disarmament, Securing Our Common Future, United Nations Secretary-General António Guterres stated that "Arms control has always been motivated by the need to keep ahead of the challenges to peace and security raised by science and technology" and by emerging means and methods of warfare. While revolutionary technologies hold much promise for humanity, when taken up for military uses they can pose risks for international peace and security. The challenge is to build understanding among stakeholders about a technology and to develop responsive solutions that mitigate such risks. That is where we might be today with military applications of artificial intelligence (AI).

There can be little doubt that AI has potential uses that could improve the health and well-being of individuals, communities, and states, and help meet the UN's Sustainable Development Goals. However, certain uses of AI could undermine international peace and security if they raise safety concerns, accelerate conflicts, or loosen human control over the means of war.

These papers emerge from a series of discussions co-convened by the UN Office for Disarmament Affairs, the Stanley Center for Peace and Security, and the Stimson Center. The project was made possible through a generous contribution by the government of Switzerland. The organizers are particularly indebted to Reto Wollenmann and Beatrice Müller of the Swiss Department of Foreign Affairs for their thought leadership and guidance throughout the project.
We are grateful to Jennifer Spindel, Paul Scharre, Vadim Kozyulin, and colleagues at the China Arms Control and Disarmament Association for their discussion papers. We also thank the experts who participated in the workshop for their thoughtful presentations and contributions.

A unique feature of this project was its multistakeholder composition, acknowledging the growing importance, in particular, of tech firms to security discussions. We hope this provides a starting point for more robust dialogues not just among governments but also with industry and research institutions, as stakeholders endeavor to maximize the benefits of AI while mitigating the misapplication of this important technology.

Brian Finlay, President and CEO, Stimson Center
Benjamin Loehrke, Program Officer, Stanley Center for Peace and Security
Chris King, Senior Political Affairs Officer, UN Office for Disarmament Affairs

Workshop Summary

Multistakeholder Perspectives on the Potential Benefits, Risks, and Governance Options for Military Applications of Artificial Intelligence

Melanie Sisson, Defense Strategy and Planning Program, Stimson Center

Few developments in science and technology hold as much promise for the future of humanity as the suite of computer science-enabled capabilities that falls under the umbrella of artificial intelligence (AI). AI has the potential to contribute to the health and well-being of individuals, communities, and states, as well as to aid fulfillment of the United Nations' 2030 Agenda for Sustainable Development. As with past revolutionary technologies, however, AI applications could affect international peace and security, especially through their integration into the tools and systems of national militaries.

In recognition of this, UN Secretary-General António Guterres, in his agenda for disarmament, Securing Our Common Future, stresses the need for UN member states to better understand the nature and implications of new and emerging technologies with potential military applications, and the need to maintain human control over weapons systems. He emphasizes that dialogue among governments, civil society, and the private sector is an increasingly necessary complement to existing intergovernmental processes.

Such an approach is particularly relevant for AI, which, as an enabling technology, is likely to be integrated into a broad array of military applications but is largely being developed by private sector entities or academic institutions for different, mostly civilian, purposes.

To facilitate a conversation among disparate stakeholders on this topic, the UN Office for Disarmament Affairs, the Stimson Center, and the Stanley Center for Peace and Security convened an initial dialogue on the intersection of AI and national military capabilities.
Over two days at UN headquarters in New York, experts from member states, industry, academia, and research institutions participated in a workshop on The Militarization of Artificial Intelligence.

Discussion within the workshop was candid and revealed that the implications for international peace and security of AI's integration into national militaries remain to a large extent unclear. Consequently, uncertainty about the domains in which, and the purposes for which, AI will be used by national militaries poses practical challenges to the design of governance mechanisms. This uncertainty generates fear and heightens perceptions of risk. These dynamics reflect the early stage of discourse on military applications of AI and reinforce the need for active and consistent engagement.

Workshop participants acknowledged and were mindful of the need for precision when referring to the large body of tools compressed into the term "AI," most notably by distinguishing between machine-assisted decision making and machine autonomy. The result was a rich discussion that identified three topical areas in need of ongoing learning and dialogue among member states and other stakeholders:

– Potential Risks of Military Applications of AI: There undoubtedly are risks posed by applications of AI within the military domain; it is important, however, not to be alarmist in addressing these potential challenges.

– Potential Benefits of Military Applications of AI: There is a need to consider more fully the potential positive applications of AI within the military domain and to develop state-level and multilateral means of capturing these benefits safely.

– Potential Governance of Military Applications of AI: There are considerable challenges to international governance posed by these emergent technologies, and the primary work of stakeholders will be to devise constructs that balance the tradeoffs among innovation, capturing the positive effects of AI, and mitigating or eliminating the risks of military AI.

Potential Risks of Military Applications of Artificial Intelligence

The risks of introducing artificial intelligence into national militaries are not small. Lethal autonomous weapon systems (LAWS) receive popular attention because such systems are easily imagined and raise important security, legal, philosophical, and ethical questions. Workshop participants, however, identified multiple other risks from military applications of AI that pose challenges to international peace and security.

Militaries are likely to use AI to assist with decision making. This may be through providing information to humans as they make decisions, or even by taking over the entire execution of decision-making processes. This may happen, for example, in communications-denied environments or in environments such as cyberspace, in which action happens at speeds beyond human cognition. While this may improve a human operator's or commander's ability to exercise direct command and control over military systems, it could also have the opposite effect. AI affords the construction of complex systems that can be difficult to understand, creating problems of transparency and of knowing whether a system is performing as expected or intended. Where transparency is sufficiently prioritized in AI design, this concern can be reduced.
Where it is not, it becomes possible that errors in AI systems will go unseen—whether such errors are accidental or caused deliberately by outside parties using techniques like hacking or data poisoning.

Participants debated whether AI could be used effectively to hack, distort, or corrupt the functions of command-and-control structures, including early warning systems for nuclear weapons. Specific note was made, however, that the integration of multiple AI-enabled systems could make it harder to identify command-and-control malfunctions. Such integration is a likely direction for advancement in military applications of AI.

Participants also discussed how advances in AI interact with human trust in the machine-based systems they use. Increasing complexity could make AI systems harder to understand and, therefore, encourage reliance on trust rather than transparency. Increased trust means that errors and failures are even less likely to be detected.

The concern was also expressed that the desire for—or fear of another's—decision-making speed may contribute to acting quickly on information aggregated and presented by AI. This pressure can increase the likelihood that decision makers will be prone to known automation biases, including the rejection of contradictory or surprising information. So too might the addition of speed create pressures that work against caution and deliberation, with leaders fearing the consequences of delay. Speed can be especially destabilizing in combat, where increases in pace ultimately could surpass the human ability to understand, process, and act on information. This mismatch between AI speed and human cognition could degrade human control over events and increase the destructiveness of violent conflict.

Although participants worry about the potential for lone actors to use AI-enabled tools, these concerns are moderated by such actors' inability to apply them at large scale.
More problematic to participants is the potential for national-level arms racing. The potential ill effects of an AI arms race are threefold. First, arms race dynamics have in the past led to high levels of government spending that were poorly prioritized and inefficient. Second, arms racing can generate an insecurity spiral, with actors perceiving others' pursuit of new capabilities as threatening. Third, the development of AI tools for use by national militaries is in a discovery phase, with government and industry alike working to find areas for useful application. Competition at the industry and state levels might, therefore, incentivize fast deployment of new and potentially insufficiently tested capabilities, as well as the hiding of national AI priorities and progress. These characteristics of arms racing—high rates of investment, a lack of transparency, mutual suspicion and fear, and a perceived incentive to deploy first—heighten the risk of avoidable or accidental conflict.

Potential Benefits of Military Applications of Artificial Intelligence

For national militaries, AI has broad potential beyond weapons systems. Often described as a tool for jobs that are "dull, dirty, and dangerous," AI applications offer a means to avoid putting human lives at risk or assigning humans to tasks that do not require the creativity of the human brain. AI systems also have the potential to reduce costs in logistics and sensing and to enhance communication and transparency in complex systems, if that is prioritized as a design value. In particular, as an information communication technology, AI might benefit the peacekeeping agenda by more effectively communicating the capacities and motivations of military actors.

Workshop participants noted that AI-enabled systems and platforms have already made remarkable and important enhancements to national intelligence, surveillance, and reconnaissance capabilities.
The ability of AI to support capturing, processing, storing, and analyzing visual and digital data has increased the quantity, quality, and accuracy of information available to decision makers. They can use this information to do everything from optimizing equipment maintenance to minimizing civilian harm. Additionally, these platforms allow for data capture in environments that are inaccessible to humans.

Participants shared broad agreement that capturing the benefits of military applications of AI will require governments and the private sector to collaborate frequently and in depth. Specifically, participants advocated for the identification of practices and norms that ensure the safety of innovation in AI, especially in the testing and deployment phases. Examples include industry-level best practices in programming, industry and government use of test protocols, and government transparency and communication about new AI-based military capabilities.

Agreement also emerged over the need for better and more comprehensive training among technologists, policymakers, and military personnel. Participants expressed clearly that managing the risks of AI will require technical specialists to have a better understanding of international relations and of the policymaking context. Effective policymaking and responsible use will also require government and military officials to have some knowledge of how AI systems work, their strengths, their possibilities, and their vulnerabilities. Practical recommendations for moving in this direction included developing common terms for use in industry, government, and multilateral discourse, and including the private sector in weapons-review committees.

Potential Governance of Military Applications of AI

The primary challenge to multilateral governance of military AI is uncertainty—about the ways AI will be applied, about whether current international law adequately captures the problems that use of AI might generate, and about the proper venues through which to advance the development of governance approaches for military applications of AI. These characteristics of military AI are amplified by the technology's rapid rate of change and by the absence of standard and accepted definitions.
Even fundamental concepts like autonomy are open to interpretation, making legislation and communication difficult.

There was skepticism among some, though not all, participants that current international law is sufficient to govern every possible aspect of the military applications of AI. Those concerned about the extent to which today's governance mechanisms are sufficient noted that there are specific characteristics of military applications of AI that may fit poorly into standing regimes—for example, international humanitarian law—or for which applying standing regimes may produce unintended consequences. This observation led to general agreement among participants that many governance approaches—including self-regulation, transparency and confidence-building measures, and intergovernmental approaches—ultimately would be required to mitigate the risks of military applications of AI. It should be noted that workshop participants included transnational nongovernmental organizations and transnational corporations—entities that increasingly have diplomatic roles.

The workshop concluded with general agreement that the UN system offers useful platforms within which to promote productive dialogue and through which to encourage the development of possible governance approaches among disparate stakeholders. All participants expressed the belief that, beyond discussions on LAWS, broad understanding of and discourse about potential military applications of AI—its benefits, risks, and governance challenges—is nascent and, indeed, underdeveloped.
Participants welcomed and encouraged more opportunities for stakeholders to educate each other, to communicate, and to innovate around the hard problems posed by military applications of AI.

Endnote

1 Transforming Our World: The 2030 Agenda for Sustainable Development, Sustainable Development Goals Knowledge Platform, United Nations, accessed November 22, 5/transformingourworld.

About the Author

Melanie Sisson is a Nonresident Fellow with the Defense Strategy and Planning Program at the Stimson Center.

This working paper was prepared for a workshop, organized by the Stanley Center for Peace and Security, UNODA, and the Stimson Center, on The Militarization of Artificial Intelligence.

Discussion Papers Introduction

Artificial Intelligence, Nuclear Weapons, and Strategic Stability

Jennifer Spindel, The University of New Hampshire

From a smart vacuum that can learn floor plans to "killer robots" that could revolutionize the battlefield, artificial intelligence has potential applications both banal and extraordinary. While applications in health care, agriculture, and business logistics can drive forward human development, military applications of artificial intelligence might make war more likely or increase its lethality. In both fact and science fiction, many of these new technologies are being developed by the private sector, introducing new governance challenges and stakeholders to conversations about the implications of new weapons development.

Paul Scharre notes that artificial intelligence is a general-purpose enabling technology, not unlike electricity. Wu Jinhuai agrees that AI will have wide applications in fields including agriculture, manufacturing, and health care, and he broadly defines artificial intelligence as the theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence. While Vadim Kozyulin does not give a definition of AI, he explains that commercial companies, including Amazon, Microsoft, IBM, and Google, have created most artificial intelligence tools and then offered them to the military.

To begin addressing challenges related to the militarization of artificial intelligence, the Stanley Center for Peace and Security, in partnership with the United Nations Office for Disarmament Affairs and the Stimson Center, commissioned working papers from authors Paul Scharre, Vadim Kozyulin, and Wu Jinhuai.
This introductory paper provides background context to orient readers and highlights similarities and differences among those papers. It is organized around three primary sections: first, the difficulties in determining what artificial intelligence is or means; second, the ways artificial intelligence can affect the character of war in the broadest sense; and finally, the promises and pitfalls of applying artificial intelligence to nuclear weapons and systems.

Because AI is not a single technology, the authors suggest various ways it could be applied to the military realm. Kozyulin, for example, points out that the Russian Ministry of Defense is interested in "combat robots." These robots are "multi-functional device[s] with anthropomorphic (humanlike) behavior, partially or fully performing functions of a person in executing a certain combat mission. [They include] a sensor system (sensors) for acquiring information, a control system, and actuation devices." Wu and Scharre suggest less overtly militarized applications of AI, including intelligence, surveillance, and reconnaissance (ISR) operations, such as analyzing and interpreting sensor data or conducting geospatial imagery analysis. Whether it is used for combat robots or for analyzing data, artificial intelligence has the potential to decrease human involvement in war. As the next section discusses, this means AI could fundamentally change the character of war.

C-3PO, Terminator, or Roomba: What Is Artificial Intelligence?

International discussions on artificial intelligence (AI) governance often revolve around the challenges of defining "artificial intelligence." AI is a diverse category that includes smart vacuums that learn floor plans and weapons that can acquire, identify, and decide to engage a target without human involvement. Defining what counts as AI, even in more-narrow military contexts, remains difficult.
The working paper authors agree that artificial intelligence can mean many things and therefore has multiple applications to the military realm.

The Evolving Character of War

Though conflict carried out entirely, or even primarily, by combat robots is an unlikely sce
