Existential Risk Prevention As Global Priority

Nick Bostrom
University of Oxford

Global Policy, Volume 4, Issue 1, February 2013. doi: 10.1111/1758-5899.12002

Abstract

Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this article, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability.

Policy Implications

- Existential risk is a concept that can focus long-term global efforts and sustainability concerns.
- The biggest existential risks are anthropogenic and related to potential future technologies.
- A moral case can be made that existential risk reduction is strictly more important than any other global public good.
- Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state.
- Some small existential risks can be mitigated today directly (e.g. asteroids) or indirectly (by building resilience and reserves to increase survivability in a range of extreme scenarios), but it is more important to build capacity to improve humanity's ability to deal with the larger existential risks that will arise later in this century. This will require collective wisdom, technology foresight, and the ability when necessary to mobilise a strong global coordinated response to anticipated existential risks.
- Perhaps the most cost-effective way to reduce existential risks today is to fund analysis of a wide range of existential risks and potential mitigation strategies, with a long-term perspective.

1. The maxipok rule

Existential risk and uncertainty

An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom, 2002). Although it is often difficult to assess the probability of existential risks, there are many reasons to suppose that the total such risk confronting humanity over the next few centuries is significant. Estimates of 10–20 per cent total existential risk in this century are fairly typical among those who have examined the issue, though inevitably such estimates rely heavily on subjective judgment.[1] The most reasonable estimate might be substantially higher or lower. But perhaps the strongest reason for judging the total existential risk within the next few centuries to be significant is the extreme magnitude of the values at stake.
Even a small probability of existential catastrophe could be highly practically significant (Bostrom, 2003; Matheny, 2007; Posner, 2004; Weitzman, 2009).

Humanity has survived what we might call natural existential risks for hundreds of thousands of years; thus it is prima facie unlikely that any of them will do us in within the next hundred.[2] This conclusion is buttressed when we analyse specific risks from nature, such as asteroid impacts, supervolcanic eruptions, earthquakes, gamma-ray bursts, and so forth: empirical impact distributions and scientific models suggest that the likelihood of extinction because of these kinds of risk is extremely small on a time scale of a century or so.[3]

In contrast, our species is introducing entirely new kinds of existential risk—threats we have no track record of surviving. Our longevity as a species therefore offers no strong prior grounds for confident optimism. Consideration of specific existential-risk scenarios bears out the suspicion that the great bulk of existential risk in the foreseeable future consists of anthropogenic existential risks—that is, those arising from human activity. In particular, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology. As our powers expand, so will the scale of their potential consequences—intended and unintended, positive and negative. For example, there appear to be significant existential risks in some of the advanced forms of biotechnology, molecular nanotechnology, and machine intelligence that might be developed in the decades ahead. The bulk of existential risk over the next century may thus reside in rather speculative scenarios to which we cannot assign precise probabilities through any rigorous statistical or scientific method. But the fact that the probability of some risk is difficult to quantify does not imply that the risk is negligible.

Probability can be understood in different senses. Most relevant here is the epistemic sense, in which probability is construed as (something like) the credence that an ideally reasonable observer should assign to the risk's materialising based on currently available evidence.[4] If something cannot presently be known to be objectively safe, it is risky at least in the subjective sense relevant to decision making. An empty cave is unsafe in just this sense if you cannot tell whether or not it is home to a hungry lion. It would be rational for you to avoid the cave if you reasonably judge that the expected harm of entry outweighs the expected benefit.

The uncertainty and error-proneness of our first-order assessments of risk is itself something we must factor into our all-things-considered probability assignments. This factor often dominates in low-probability, high-consequence risks—especially those involving poorly understood natural phenomena, complex social dynamics, or new technology, or that are difficult to assess for other reasons. Suppose that some scientific analysis A indicates that some catastrophe X has an extremely small probability P(X) of occurring. Then the probability that A has some hidden crucial flaw may easily be much greater than P(X).[5] Furthermore, the conditional probability of X given that A is crucially flawed, P(X | ¬A), may be fairly high. We may then find that most of the risk of X resides in the uncertainty of our scientific assessment that P(X) was small (Figure 1) (Ord, Hillerbrand and Sandberg, 2010).

Figure 1. Meta-level uncertainty.
Source: Ord et al., 2010.
Note: Factoring in the fallibility of our first-order risk assessments can amplify the probability of risks assessed to be extremely small. An initial analysis (left side) gives a small probability of a disaster (black stripe). But the analysis could be wrong; this is represented by the grey area (right side). Most of the all-things-considered risk may lie in the grey area rather than in the black stripe.
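
A minimal numerical sketch, in Python, makes the structure of this point concrete. All the probabilities below are invented for illustration; they are not estimates from the article or the literature.

```python
# Sketch of the Ord, Hillerbrand and Sandberg (2010) point: the
# all-things-considered probability of catastrophe X mixes the case
# where analysis A is sound with the case where A is crucially flawed.
# All numbers are made up for illustration.

p_flaw = 1e-3             # credence that A has a hidden crucial flaw
p_x_given_sound = 1e-9    # P(X) according to A, if A is sound
p_x_given_flawed = 1e-2   # P(X | not-A): with A flawed, only a vague prior remains

p_x_total = (1 - p_flaw) * p_x_given_sound + p_flaw * p_x_given_flawed

print(f"P(X) if the analysis is trusted: {p_x_given_sound:.1e}")
print(f"All-things-considered P(X):      {p_x_total:.1e}")
# Output is roughly 1e-5: the 'grey area' (flawed-analysis term)
# dominates the 'black stripe' (the analysis's own tiny estimate).
```
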
Qualitative risk categories

Since a risk is a prospect that is negatively evaluated, the seriousness of a risk—indeed, what is to be regarded as risky at all—depends on an evaluation. Before we can determine the seriousness of a risk, we must specify a standard of evaluation by which the negative value of a particular possible loss scenario is measured. There are several types of such evaluation standard. For example, one could use a utility function that represents some particular agent's preferences over various outcomes. This might be appropriate when one's duty is to give decision support to a particular decision maker. But here we will consider a normative evaluation, an ethically warranted assignment of value to various possible outcomes. This type of evaluation is more relevant when we are inquiring into what our society's (or our own individual) risk-mitigation priorities ought to be.

There are conflicting theories in moral philosophy about which normative evaluations are correct. I will not here attempt to adjudicate any foundational axiological disagreement. Instead, let us consider a simplified version of one important class of normative theories. Let us suppose that the lives of persons usually have some significant positive value and that this value is aggregative (in the sense that the value of two similar lives is twice that of one life). Let us also assume that, holding the quality and duration of a life constant, its value does not depend on when it occurs or on whether it already exists or is yet to be brought into existence as a result of future events and choices. These assumptions could be relaxed and complications could be introduced, but we will confine our discussion to the simplest case.

Within this framework, then, we can roughly characterise a risk's seriousness using three variables: scope (the size of the population at risk), severity (how badly this population would be affected), and probability (how likely the disaster is to occur, according to the most reasonable judgment, given currently available evidence). Using the first two of these variables, we can construct a qualitative diagram of different types of risk (Figure 2).

Figure 2. Qualitative risk categories.
Source: Author.

  Severity (left to right): imperceptible | endurable | crushing (extending toward 'hellish')
  Scope (bottom to top): personal, local, global, trans-generational, pan-generational (extending toward 'cosmic')

  pan-generational:   One original Picasso painting destroyed | Destruction of cultural heritage | X (existential risk)
  trans-generational: Biodiversity reduced by one species of beetle | Dark age | Aging
  global:             Global warming by 0.01 °C | Thinning of ozone layer | Ephemeral global tyranny
  local:              Congestion from one extra vehicle | Recession in one country | Genocide
  personal:           Loss of one hair | Car is stolen | Fatal car crash

  (The label 'global catastrophic risk' marks the broader upper-right region of the diagram.)

Note: The scope of a risk can be personal (affecting only one person), local (affecting some geographical region or a distinct group), global (affecting the entire human population or a large part thereof), trans-generational (affecting humanity for numerous generations), or pan-generational (affecting humanity over all, or almost all, future generations). The severity of a risk can be classified as imperceptible (barely noticeable), endurable (causing significant harm but not completely ruining quality of life), or crushing (causing death or a permanent and drastic reduction of quality of life). (The probability dimension could be displayed along the z-axis.)

The area marked 'X' in Figure 2 represents existential risks. This is the category of risks that have (at least) crushing severity and (at least) pan-generational scope.[6] As noted, an existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or the permanent and drastic failure of that life to realise its potential for desirable development. In other words, an existential risk jeopardises the entire future of humankind.
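
The classification can be rendered schematically in code. The following Python sketch is not from the article; it simply encodes the two axes of Figure 2 as ordered categories and the definition of the area marked 'X'.

```python
from enum import IntEnum

# Schematic encoding of Figure 2's two qualitative dimensions.
# Larger values lie further toward the 'more serious' upper-right region.

class Scope(IntEnum):
    PERSONAL = 1
    LOCAL = 2
    GLOBAL = 3
    TRANS_GENERATIONAL = 4
    PAN_GENERATIONAL = 5

class Severity(IntEnum):
    IMPERCEPTIBLE = 1
    ENDURABLE = 2
    CRUSHING = 3

def is_existential(scope: Scope, severity: Severity) -> bool:
    """Area marked 'X': at least pan-generational scope
    and at least crushing severity."""
    return scope >= Scope.PAN_GENERATIONAL and severity >= Severity.CRUSHING

# Examples drawn from the figure:
assert not is_existential(Scope.PERSONAL, Severity.CRUSHING)  # fatal car crash
assert not is_existential(Scope.GLOBAL, Severity.ENDURABLE)   # thinning of ozone layer
assert is_existential(Scope.PAN_GENERATIONAL, Severity.CRUSHING)
```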

Magnitude of expected loss in existential catastrophe

Holding probability constant, risks become more serious as we move toward the upper-right region of Figure 2. For any fixed probability, existential risks are thus more serious than other risk categories. But just how much more serious might not be intuitively obvious. One might think we could get a grip on how bad an existential catastrophe would be by considering some of the worst historical disasters we can think of—such as the two world wars, the Spanish flu pandemic, or the Holocaust—and then imagining something just a bit worse. Yet if we look at global population statistics over time, we find that these horrible events of the past century fail to register (Figure 3).

Figure 3. World population over the last century.
Source: Author.
Note: Calamities such as the Spanish flu pandemic, the two world wars, and the Holocaust scarcely register. (If one stares hard at the graph, one can perhaps just barely make out a slight temporary reduction in the rate of growth of the world population during these events.)

But even this reflection fails to bring out the seriousness of existential risk. What makes existential catastrophes especially bad is not that they would show up robustly on a plot like the one in Figure 3, causing a precipitous drop in world population or average quality of life. Instead, their significance lies primarily in the fact that they would destroy the future. The philosopher Derek Parfit made a similar point with the following thought experiment:

    I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

    1. Peace.
    2. A nuclear war that kills 99 per cent of the world's existing population.
    3. A nuclear war that kills 100 per cent.

    2 would be worse than 1, and 3 would be worse than 2. Which is the greater of these two differences? Most people believe that the greater difference is between 1 and 2. I believe that the difference between 2 and 3 is very much greater. The Earth will remain habitable for at least another billion years. Civilisation began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilised human history. The difference between 2 and 3 may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second (Parfit, 1984, pp. 453–454).

To calculate the loss associated with an existential catastrophe, we must consider how much value would come to exist in its absence. It turns out that the ultimate potential for Earth-originating intelligent life is literally astronomical.

One gets a large number even if one confines one's consideration to the potential for biological human beings living on Earth. If we suppose with Parfit that our planet will remain habitable for at least another billion years, and we assume that at least one billion people could live on it sustainably, then the potential exists for at least 10^16 human lives of normal duration. These lives could also be considerably better than the average contemporary human life, which is so often marred by disease, poverty, injustice, and various biological limitations that could be partly overcome through continuing technological and moral progress.

However, the relevant figure is not how many people could live on Earth but how many descendants we could have in total. One lower bound of the number of biological human life-years in the future accessible universe (based on current cosmological estimates) is 10^34 years.[7] Another estimate, which assumes that future minds will be mainly implemented in computational hardware instead of biological neuronal wetware, produces a lower bound of 10^54 human-brain-emulation subjective life-years (or 10^71 basic computational operations) (Bostrom, 2003).[8] If we make the less conservative assumption that future civilisations could eventually press close to the absolute bounds of known physics (using some as yet unimagined technology), we get radically higher estimates of the amount of computation and memory storage that is achievable and thus of the number of years of subjective experience that could be realised.[9]

Existential Risk Prevention as Global Prioritymillionth of one percentage point is at least a hundredtimes the value of a million human lives. The more technologically comprehensive estimate of 1054 humanbrain-emulation subjective life-years (or 1052 lives ofordinary length) makes the same point even morestarkly. Even if we give this allegedly lower bound onthe cumulative output potential of a technologicallymature civilisation a mere 1 per cent chance of beingcorrect, we find that the expected value of reducingexistential risk by a mere one billionth of one billionth ofone percentage point is worth a hundred billion times asmuch as a billion human lives.One might consequently argue that even the tiniestreduction of existential risk has an expected valuegreater than that of the definite provision of any ‘ordinary’ good, such as the direct benefit of saving 1 billionlives. And, further, that the absolute value of the indirecteffect of saving 1 billion lives on the total cumulativeamount of existential risk—positive or negative—isalmost certainly larger than the positive value of thedirect benefit of such an action.10MaxipokThese considerations suggest that the loss in expectedvalue resulting from an existential catastrophe is soenormous that the objective of reducing existential risksshould be a dominant consideration whenever we actout of an impersonal concern for humankind as a whole.It may be useful to adopt the following rule of thumbfor such impersonal moral action:Maxipok Maximise the probability of an ‘OK outcome’,where an OK outcome is any outcome that avoids existential catastrophe.At best, maxipok is a rule of thumb or a prima faciesuggestion. It is not a principle of absolute validity, sincethere clearly are moral ends other than the preventionof existential catastrophe. The principle’s usefulness is asan aid to prioritisation. Unrestricted altruism is not so19common that we can afford to fritter it away on a plethora of feel-good projects of suboptimal efficacy. If benefiting humanity by increasing existential safety achievesexpected good on a scale many orders of magnitudegreater than that of alternative contributions, we woulddo well to focus on this most efficient philanthropy.Note that maxipok differs from the popular maximinprinciple (‘Choose the action that has the best worstcase outcome’).11 Since we cannot completely eliminateexistential risk—at any moment, we might be tossedinto the dustbin of cosmic history by the advancingfront of a vacuum phase transition triggered in someremote galaxy a billion years ago—the use of maximinin the present context would entail choosing the actionthat has the greatest benefit under the assumption ofimpending extinction. Maximin thus implies that weought all to start partying as if there were no tomorrow.That implication, while perhaps tempting, is implausible.2. Classification of existential riskTo bring attention to the full spectrum of existential risk,we can distinguish four classes of such risk: humanextinction, permanent stagnation, flawed realisation, andsubsequent ruination. 
Maxipok

These considerations suggest that the loss in expected value resulting from an existential catastrophe is so enormous that the objective of reducing existential risks should be a dominant consideration whenever we act out of an impersonal concern for humankind as a whole. It may be useful to adopt the following rule of thumb for such impersonal moral action:

    Maxipok: Maximise the probability of an 'OK outcome', where an OK outcome is any outcome that avoids existential catastrophe.

At best, maxipok is a rule of thumb or a prima facie suggestion. It is not a principle of absolute validity, since there clearly are moral ends other than the prevention of existential catastrophe. The principle's usefulness is as an aid to prioritisation. Unrestricted altruism is not so common that we can afford to fritter it away on a plethora of feel-good projects of suboptimal efficacy. If benefiting humanity by increasing existential safety achieves expected good on a scale many orders of magnitude greater than that of alternative contributions, we would do well to focus on this most efficient philanthropy.

Note that maxipok differs from the popular maximin principle ('Choose the action that has the best worst-case outcome').[11] Since we cannot completely eliminate existential risk—at any moment, we might be tossed into the dustbin of cosmic history by the advancing front of a vacuum phase transition triggered in some remote galaxy a billion years ago—the use of maximin in the present context would entail choosing the action that has the greatest benefit under the assumption of impending extinction. Maximin thus implies that we ought all to start partying as if there were no tomorrow. That implication, while perhaps tempting, is implausible.
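
A toy decision problem illustrates the contrast; the actions, payoffs, and probabilities below are invented for the example and do not come from the article.

```python
# Toy contrast between maximin and maxipok. 'ok' marks outcomes that
# avoid existential catastrophe; all numbers are invented.

actions = {
    "party now": [
        (0.010, -90, False),   # catastrophe strikes, but at least we partied
        (0.990, 10, True),
    ],
    "reduce risk": [
        (0.005, -100, False),  # catastrophe strikes despite our efforts
        (0.995, 5, True),
    ],
}

def worst_case(outcomes):
    """Payoff of the worst outcome, ignoring its probability."""
    return min(payoff for _, payoff, _ in outcomes)

def p_ok(outcomes):
    """Probability of avoiding existential catastrophe."""
    return sum(p for p, _, ok in outcomes if ok)

# Maximin compares worst cases only. Because some existential risk is
# unavoidable, every action's worst case is catastrophe, so maximin
# favours whichever action is best given impending doom:
print(max(actions, key=lambda a: worst_case(actions[a])))  # -> party now

# Maxipok compares probabilities of an OK outcome instead:
print(max(actions, key=lambda a: p_ok(actions[a])))        # -> reduce risk
```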

2. Classification of existential risk

To bring attention to the full spectrum of existential risk, we can distinguish four classes of such risk: human extinction, permanent stagnation, flawed realisation, and subsequent ruination. We define these in Table 1 below.

Table 1. Classes of existential risk[12]

  Human extinction      Humanity goes extinct prematurely, i.e., before reaching technological maturity.
  Permanent stagnation  Humanity survives but never reaches technological maturity.
                        Subclasses: unrecovered collapse, plateauing, recurrent collapse.
  Flawed realisation    Humanity reaches technological maturity but in a way that is dismally and irremediably flawed.
                        Subclasses: unconsummated realisation, ephemeral realisation.
  Subsequent ruination  Humanity reaches technological maturity in a way that gives good future prospects, yet subsequent developments cause the permanent ruination of those prospects.

By 'humanity' we here mean Earth-originating intelligent life and by 'technological maturity' we mean the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved.

Human extinction

Although it is conceivable that, in the billion or so years during which Earth might remain habitable before being overheated by the expanding sun, a new intelligent species would evolve on our planet to fill the niche vacated by an extinct humanity, this is very far from certain to happen. The probability of a recrudescence of intelligent life is reduced if the catastrophe causing the extinction of the human species also exterminated the great apes.
