SPOTLIGHT ON ARTIFICIAL INTELLIGENCE AND FREEDOM OF EXPRESSION


#SAIFE #AIFreeSpeech

Spotlight on Artificial Intelligence and Freedom of Expression (#SAIFE)

Author: Barbora Bukovska
Contributors: Amy Brouillette, Barbora Bukovska, Julia Haas, Fanny Hidvégi, Nani Jansen, Lorena Jaume-Palasi, Carly Kind, Bojana Kostic, Đorđe Krivokapic, Emma Llanso, Victor Naumov, Eliška Pírková, Krisztina Rozgonyi, and Martin Scheinin
Edited by: Julia Haas
Design & Layout by: Peno Mishoyan
Commissioned by Harlem Désir, the OSCE Representative on Freedom of the Media

2020 Office of the Representative on Freedom of the Media
Organization for Security and Co-operation in Europe (OSCE)
Wallnerstrasse 6, A-1010 Vienna, Austria
Tel.: +43-1 514 36 68 00
Fax: +43-1 514 36 68 02
E-mail: pm-fom@osce.org
http://www.osce.org/fom
ISBN: 978-3-903128-54-5

Table of Contents

Foreword
Executive Summary
Introduction
Background
    Artificial Intelligence
    The Role of Internet Intermediaries as Gatekeepers of Freedom of Expression
    Context of the AI Processes
    AI and Content Moderation
Applicable International and Regional Standards
    Standards on AI and Human Rights
    Standards on AI and Freedom of Expression
    AI and the Responsibilities of the Private Sector
    AI and Ethics
Freedom of Expression and AI: Overall Problems
Case Studies
    The Use of AI and “Security Threats”
    The Use of AI and “Hate Speech”
    AI and Media Pluralism and Diversity
    AI-Powered State Surveillance
Conclusion and Recommendations

Foreword

Since the early stages of the internet, various technologies have been deployed to enable and facilitate online communication. Over the last years, machine-learning technologies, such as artificial intelligence (AI), have become increasingly important tools for shaping and arbitrating online information. AI-powered tools rely on the collection and processing of vast amounts of data, which in turn are frequently monetized, and are often used for detecting, evaluating and moderating content at scale, oftentimes with a view to identifying and filtering out illegal and potentially harmful content. At the same time, AI is used for ranking, promoting and demoting massive amounts of content online.

AI has become a major reality of the information sphere, of online content moderation, and of the prioritization of information. How do we want to use algorithms and machine-learning tools in such an important domain of our lives? How will these tools be controlled, and by whom? There is a genuine risk that such technologies could have a detrimental impact on fundamental freedoms, especially when driven by commercial or political interests. The use of AI could seriously jeopardize the enjoyment of our human rights, in particular the freedom of expression. Moreover, given that most AI-powered tools lack transparency and accountability, as well as effective remedies, their increasing use risks exacerbating existing challenges to free speech, access to information and media pluralism.

For these reasons, last year, I launched a project to put a spotlight on AI and freedom of expression (#SAIFE). This #SAIFE initiative focuses on the profound impact that the use of AI has on seeking, receiving and imparting information and ideas. In March 2020, I published an introductory non-paper to promote a clearer understanding of the policies and practices of governments, regulators, and internet intermediaries in their use of AI. I hope that the introductory non-paper contributed to unveiling the complexity of the impact that these technologies can have on freedom of expression and access to information.

Based on the introductory non-paper and discussions within the project, I am pleased to present this Paper now, which I hope will provide guidance for further discussions and actions to prevent any intentional or unintentional negative implications of the use of AI for free speech. It is only through close co-operation at both national and international levels, as well as among various stakeholders, including civil society and the tech industry, that a responsible and human rights-friendly use of AI can be ensured. Only then can illegal and potentially harmful content, such as speech presenting “security threats” or racism, anti-Semitism and “hate speech”, be tackled, and pluralistic democratic discourse be promoted, without harming democracy as such.

While many international actors discuss important questions around the impact of AI on the enjoyment of human rights, my Office’s #SAIFE project focuses specifically on the impact of AI on freedom of expression, including the impact of AI on journalistic work and the overall media environment. While the challenges vary from country to country, with diverse national practices and different internet intermediaries prevalent, it is clear that challenges to free speech stemming from an increased use of AI exist across the entire OSCE region.

I hope that this publication, with its preliminary recommendations, will serve as a useful reference for much-needed discussions and for identifying a way forward to safeguard free speech when deploying AI.

As a next step, this Paper, which will be discussed at an online event on 8 July,1 will be followed by a public consultation phase, which will provide the foundation for a further elaboration of concrete policy recommendations.

I want to sincerely thank all the contributors to this paper,2 especially Barbora Bukovska, who drafted this Paper, and Đorđe Krivokapic, the main author of the introductory non-paper that has been integrated into this publication. Finally, a special thanks to Julia Haas of my Office, and all colleagues who have contributed to making this publication possible.

3 July 2020
Harlem Désir, OSCE Representative on Freedom of the Media

1 OSCE RFoM event on The Rise of Artificial Intelligence & How it will Reshape the Future of Free Speech, 8 July 2020.
2 Amy Brouillette, Fanny Hidvégi, Nani Jansen, Lorena Jaume-Palasi, Carly Kind, Bojana Kostic, Emma Llanso, Victor Naumov, Eliška Pírková, Krisztina Rozgonyi, and Martin Scheinin.

Executive Summary

Artificial intelligence (AI) – a broad concept used in policy discussions to refer to many different types of technology – greatly influences and impacts the way people seek, receive, impart and access information, and how they exercise their right to freedom of expression in the digital ecosystem. If implemented responsibly, AI can benefit societies, but there is a genuine risk that its deployment by States and private companies, such as internet intermediaries, could have a deteriorating effect on human rights.

When considering the impact of AI on freedom of expression, the underlying issues that need to be taken into account include the business model of most internet intermediaries, which is based on the collection and processing of massive amounts of data about their users. Individual users’ online behaviour is constantly surveyed. Personal digital footprints, even if small, are sufficient for various AI-powered online services to classify users into pre-developed profiles and to predict customer “needs” based on the data of other, supposedly similar people. Most individuals are neither informed that these processes are taking place, nor are they aware of their workings and potentially discriminatory aspects.

Internet intermediaries use AI at various stages of content moderation and content curation on their platforms: from uploads, to making certain content more visible to users, to the removal of content. The deployment of AI in these processes creates risks for freedom of expression. Whether particular content should be removed (either for violation of the law or of community guidelines) often depends on a number of issues, including the context, which is difficult to assess without human involvement. This could potentially lead to the removal of legitimate expression, or to a failure to remove content that could have a detrimental impact on many users. When it comes to content curation, the monetization of users’ attention and engagement has had a great impact on diversity and pluralism online. This is all the more poignant, since the criteria that internet intermediaries apply are usually not open to the public. These problems cut across all types of content, but are most prominent in the areas of “security threats” and “hate speech”.

This Paper addresses these challenges, building on the initial work of the OSCE Representative on Freedom of the Media. It maps the key challenges to freedom of expression presented by AI across the OSCE region, in light of international and regional standards on human rights and AI. It identifies a number of overarching problems that AI poses to freedom of expression and human rights in general, in particular:

• The limited understanding of the implications for freedom of expression caused by AI, in particular machine learning;
• Lack of respect for freedom of expression in content moderation and curation;
• State and non-State actors circumventing due process and the rule of law in AI-powered content moderation;
• Lack of transparency regarding the entire process of AI design, deployment and implementation;
• Lack of accountability and independent oversight over AI systems;
• Lack of effective remedies for violations of the right to freedom of expression in relation to AI.

This Paper observes that these problems became more pronounced in the first months of 2020, when the COVID-19 pandemic incentivized States and the private sector to use AI even more, as part of measures introduced in response to the pandemic. A tendency to revert to technocratic solutions, including AI-powered tools, without adequate societal debate or democratic scrutiny was witnessed.

Using four specific case studies (“security threats”; “hate speech”; media pluralism and diversity online; and the impact of AI-powered State surveillance on freedom of expression), this Paper shows how these problems manifest themselves.

This Paper concludes that there is a need to further raise awareness, and improve understanding, of the impact that AI-related decision-making policies and practices have on freedom of expression, next to having a more systematic overview of regional approaches and methodologies in the OSCE region. It provides a number of preliminary recommendations to OSCE participating States and internet intermediaries, to help ensure that freedom of expression and information are better protected when AI is deployed.

Introduction

The development and use of artificial intelligence (AI) – a broad concept used in policy discussions to refer to many different types of technology – has rapidly expanded over the last two decades. The primary cause of this development has been the mainstream adoption of widely accessible and affordable technology. The availability of large amounts of data, collected mostly by private actors on the basis of their data-driven business models, has been a driving factor.3 Consequently, AI has become a part of many aspects of people’s daily lives – ranging from commerce, traffic management, policing and law enforcement, health diagnostics and health care, to public services and governance.

The use of AI has further increased with the exponential growth of the sharing and re-sharing of content generated by internet users.4 Internet intermediaries, in particular social media platforms and search engines, now typically deploy AI systems to manage information flows and to shape and arbitrate content online.5 AI is used both in content curation (supporting the distribution of content to audiences, such as content ranking or editorial data analysis) and in content removal (filtering and taking down illegal or otherwise problematic content). The use of AI-powered technologies by internet intermediaries has also been fostered by increased pressure from

3 The problems of data collection and the underlying business model of internet intermediaries are discussed in greater detail in the subsequent sections.
4 According to available data, every single hour, more than 500 hours of video are uploaded to YouTube and 14.58 million photos to Facebook. For more information, see, e.g., Omnicore statistics.
5 This Paper uses the term “AI” as an umbrella term that encompasses various concepts. It also acknowledges that the largest internet intermediaries (in particular the social media companies Facebook, YouTube and Twitter) routinely use machine-learning classifiers and algorithmic filtering techniques to detect problematic content (e.g., “hate speech” or spam) and co-ordinated inauthentic behaviour, or ranking and recommendation algorithms to promote and target content, while these systems are not necessarily “AI” stricto sensu. Many smaller intermediaries, due to their resource capabilities, use much simpler algorithms to organize content.

States on them to remove certain content within very short and strict time periods.6

While – if implemented responsibly – AI can benefit societies and provide some positive changes,7 there is a genuine risk that underlying political, commercial or other interests could have a deteriorating effect on human rights.8 A number of reports and studies examining the impact of AI on human rights – in particular on those groups in society that are in danger of discrimination – have already documented these risks.9

As AI greatly influences the way people seek, receive, impart and access information in the digital ecosystem, the ever increasing, pervasive

6 See, e.g., the Network Enforcement Act (Netzwerkdurchsetzungsgesetz) of Germany, adopted on 17 June 2017; Directive (EU) 2019/790 on Copyright and related rights in the Digital Single Market, European Parliament, 17 April 2019; the EU Code of conduct on countering illegal hate speech online, European Commission, Twitter, Facebook, Microsoft and YouTube, 30 June 2016.
7 For example, in the field of the media, it has been recognized that the automation of some tasks (such as speech-to-text transcription) can lead to better productivity, an improved ability to predict demand in order to adjust resources, or better access to relevant data; see, e.g., D. James, How artificial intelligence is transforming the media industry, The Record, 7 September 2018; or R. Shields, What the media industry really thinks about the impact of AI, The Drum, 6 July 2018.
8 See, e.g., R. F. Jørgensen, Human rights in the age of platforms, MIT Press, 2019.
9 See, e.g., Ranking Digital Rights, Human rights risk scenarios: Algorithms, machine learning and automated decision-making (Consultation Draft), 2020; the Committee of Experts on Internet Intermediaries (MSI-NET), Algorithms and Human Rights: Study on the Human Rights Dimensions of Automated Data Processing Techniques (in Particular Algorithms) and Possible Regulatory Implications, DGI(2017)12, March 2018; the Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT) of the Council of Europe, Responsibility and AI: A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework, DGI(2019)05, September 2019; Privacy International and ARTICLE 19, Privacy and Freedom of Expression in the Age of Artificial Intelligence, April 2018; AccessNow, 26 recommendations on content governance: a guide for lawmakers, regulators, and company policy makers.

and often invisible use of AI by both public authorities10 and private companies, coupled with their ability to identify and track people, can have a chilling effect on the right to freedom of expression and information. It can lead to self-censorship and altered behaviour, both in online and offline spaces, especially among dissenting voices. This, in turn, may lead to less plurality and diversity of speech, which would ultimately impede the free flow of information and democratic discourse.

The impact of AI on freedom of expression has been recognized by the international community, and a number of international and regional bodies have called for respect for human rights in the context of AI.11 However, there is still a need for all stakeholders to better understand what the specific implications of AI are for freedom of expression and freedom of the media, and how the existing freedom of expression framework applies to instances where AI is used.

This need for a better understanding became even clearer in the first months of 2020, when the global outbreak of COVID-19 led many States to introduce emergency powers. During this period, there was an exponential increase in the use of AI-powered surveillance by States, and an increase in reliance on AI in online content moderation by internet intermediaries.12 Looking ahead, there is a strong tendency to implement AI across the board more often, making its potential, for good and for bad, even more pronounced.

10 See, e.g., M. Kuziemski & G. Misuraca, AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings, Telecommunications Policy, Volume 44, Issue 6, 2020. It concludes that “power has proven to be a central consideration for the use cases of AI in the public sector – by embracing automated methods, one gains control over the physical space, vital resources and information,” p. 10.
11 See Chapter V, Applicable International and Regional Standards, and Chapter VI, Freedom of Expression and AI: Overall Problems.
12 See, e.g., ARTICLE 19, Coronavirus must not be used as an excuse to entrench surveillance, 20 March 2020; Privacy International, Telco data and Covid-19: A primer, 21 April 2020; or EDRi, COVID-Tech: Surveillance is a pre-existing condition, 27 May 2020.

It is crucial that all State and non-State actors critically evaluate the impact of AI on freedom of expression and freedom of the media. As an international security organization with a strong human rights component, the Organization for Security and Co-operation in Europe (OSCE), and especially the Office of the Representative on Freedom of the Media (RFoM), is well placed to help ensure consistency with global and regional human rights standards in this area. Building on the initial work of the RFoM in this area and initial discussions in 2019 and early 2020,13 this Paper seeks to provide guidance to participating States in this process. Next to OSCE commitments, the Paper refers to relevant recommendations developed within other regional bodies, in particular the EU and the Council of Europe.

The structure of this Paper is as follows:

• It first examines some key contextual issues, as well as the societal and legislative landscape that must be considered when developing free speech-compliant actions and policies for the development and use of AI.
• Second, it provides a brief overview of the applicable standards for the protection of freedom of expression in the context of the deployment and use of AI at international and regional levels.
• This is followed by an examination of the main challenges that AI poses to freedom of expression and freedom of the media overall – in particular the potential lack of transparency, accountability and respect for the rule of law in AI processes.
• Subsequently, the Paper offers case studies on how AI impacts the right to freedom of expression in specific areas of concern:

13 OSCE RFoM, Non-paper on the impact of artificial intelligence on freedom of expression, 4 March 2020.

› First, it explores the challenges to freedom of expression stemming from AI-powered moderation of certain content (content that presents “security threats” and “hate speech”), the associated “surveillance capitalism”, and the challenges this presents to media pluralism and diversity online.
› Second, it outlines the impact of AI-powered State surveillance on the right to freedom of expression, the freedom of the media, and the ability of journalists to carry out their work.

