The Philosophical Foundations Of Artificial Intelligence


Selmer Bringsjord and Konstantine Arkoudas
Department of Cognitive Science
Department of Computer Science
Rensselaer Polytechnic Institute (RPI)
Troy NY 12180 USA
October 25, 2007

Contents

1 Introduction
2 What is AI?
3 Philosophical AI: The Example of Mechanizing Reasoning
4 Historical and Conceptual Roots of AI
5 Computational Theories of Mind and the Problem of Mental Content
6 Philosophical Issues
7 Connectionism and dynamical systems
8 AI From Below: Situated Intelligence
9 The Future of AI
References

1 Introduction

The structure of this chapter is as follows. The next section (2) is devoted to explaining, in broad strokes, what artificial intelligence (AI) is, and that any plausible definition of the field must revolve around a key philosophical distinction between “strong” and “weak” AI.1 Next (section 3), we take up the long-standing dream of mechanizing human reasoning, which in singular fashion continues to inextricably link philosophy, logic, and AI. The following section (4) summarizes the conceptual origins of the field, after which, in section 5, we discuss the computational view of the human mind that has underpinned most AI work to date, and the associated problem of mental content. Section 6 discusses a number of philosophical issues associated with classical AI, while section 8 discusses a relatively new approach to AI and cognitive science, and attempts to place that approach in a broader philosophical context. Finally (section 9), the future of AI, from a somewhat philosophical standpoint, is briefly considered.

2 What is AI?

This is itself a deep philosophical question, and attempts to systematically answer it fall within the foundations of AI as a rich topic for analysis and debate. Nonetheless, a provisional answer can be given: AI is the field devoted to building artifacts capable of displaying, in controlled, well-understood environments, and over sustained periods of time, behaviors that we consider to be intelligent, or more generally, behaviors that we take to be at the heart of what it is to have a mind. Of course this “answer” gives rise to further questions, most notably: what exactly constitutes intelligent behavior, what it is to have a mind, and how humans actually manage to behave intelligently. The last question is empirical; it is for psychology and cognitive science to answer. It is particularly pertinent, however, because any insight into human thought might help us to build machines that work similarly.
Indeed, as will emerge in this article, AI and cognitive science have developed along parallel and inextricably interwoven paths; their stories cannot be told separately. The second question, the one that asks what is the mark of the mental, is philosophical. AI has lent significant urgency to it, and conversely, we will see that careful philosophical contemplation of this question has influenced the course of AI itself. Finally, the first challenge, that of specifying precisely what is to count as intelligent behavior, has traditionally been met by proposing particular behavioral tests whose successful passing would signify the presence of intelligence.

The most famous of these is what has come to be known as the Turing Test (TT), introduced by Turing (1950). In TT, a woman and a computer are sequestered in sealed rooms, and a human judge, ignorant as to which of the two rooms contains which contestant, asks questions by email (actually, by teletype, to use the original term) of the two. If, on the strength of returned answers, the judge can do no better than 50/50 when delivering a verdict as to which room houses which player, we say that the computer in question has passed TT. According to Turing, a computer able to pass TT should be declared a thinking machine.

His claim has been controversial, although it seems undeniable that linguistic behavior of the sort required by TT is routinely taken to be at the heart of human cognition. Part of the controversy stems from the unabashedly behaviorist presuppositions of the test. Block’s “Aunt Bertha” thought experiment (1981) was intended to challenge these presuppositions, arguing that it is not only the behavior of an organism that determines whether it is intelligent. We must also consider how the organism achieves intelligence. That is, the internal functional organization of the system must be taken into account. This was a key point of functionalism, another major philosophical undercurrent of AI to which we will return later.

1. The distinction is due to Searle (1980).

Another criticism of TT is that it is unrealistic and may even have obstructed AI progress insofar as it is concerned with disembodied intelligence. As we will see in the sequel, many thinkers have concluded that disembodied artifacts with human-level intelligence are a pipe dream—practically impossible to build, if not downright conceptually absurd. Accordingly, Harnad (1991) insists that sensorimotor capability is required of artifacts that would spell success for AI, and he proposes the Total TT (TTT) as an improvement over TT. Whereas in TT a bodiless computer program could, at least in principle, pass, TTT-passers must be robots able to operate in the physical environment in a way that is indistinguishable from the behaviors manifested by embodied human persons navigating the physical world.

When AI is defined as the field devoted to engineering artifacts able to pass TT, TTT, and various other tests,2 it can be safely said that we are dealing with weak AI. Put differently, weak AI aims at building machines that act intelligently, without taking a position on whether or not the machines actually are intelligent.

There is another answer to the What is AI? question: viz., AI is the field devoted to building persons, period. As Charniak and McDermott (1985) put it in their classic introduction to AI:

The ultimate goal of AI, which we are very far from achieving, is to build a person, or, more humbly, an animal. (Charniak & McDermott 1985, p. 7)

Notice that Charniak and McDermott don’t say that the ultimate goal is to build something that appears to be a person. Their brand of AI is so-called strong AI, an ambitious form of the field aptly summed up by Haugeland:

The fundamental goal [of AI research] is not merely to mimic intelligence or produce some clever fake. Not at all.
AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves. (Haugeland 1985b, p. 2)

This “theoretical conception” of the human mind as a computer has served as the bedrock of most strong-AI research to date. It has come to be known as the computational theory of the mind; we will discuss it in detail shortly. On the other hand, AI engineering that is itself informed by philosophy, as in the case of the sustained attempt to mechanize reasoning, discussed in the next section, can be pursued in the service of both weak and strong AI.

3 Philosophical AI: The Example of Mechanizing Reasoning

It would not be unreasonable to describe Classical Cognitive Science as an extended attempt to apply the methods of proof theory to the modelling of thought.
Fodor and Pylyshyn (1988, pp. 29–30)

This section is devoted to a discussion of an area that serves as an exemplar of AI that is bound up with philosophy (versus philosophy of AI). This is the area that any student of both philosophy and AI ought to be familiar with, first and foremost.3 Part of the reason for this is that other problems in AI of at least a partially philosophical nature4 are intimately connected with the attempt to mechanize human-level reasoning.

2. More stringent tests than TT and TTT are discussed by Bringsjord (1995). There has long been a tradition according to which AI should be considered the field devoted to building computational artifacts able to excel on tests of intelligence; see (Evans 1968, Bringsjord and Schimanski 2003).
3. There are other areas that might be discussed as well (e.g., learning), but no other area marks the genuine marriage of AI and philosophy as deeply and firmly as that of mechanical reasoning.

Aristotle considered rationality to be an essential characteristic of the human mind. Deductive thought, expressed in terms of syllogisms, was the hallmark of such rationality, as well as the fundamental intellectual instrument (“organon”) of all science. Perhaps the deepest contribution of Aristotle to artificial intelligence was the idea of formalism. The notion that certain patterns of logical thought are valid by virtue of their syntactic form, independently of their content, was an exceedingly powerful innovation, and it is that notion that remains at the heart of the contemporary computational theory of the mind (Pylyshyn 1989) and of what we have called strong AI above, and which will be elaborated in section 5.

In view of the significance that was historically attached to deduction in philosophy (starting with Aristotle and continuing with Euclid, and later Bacon, Hobbes, Leibniz, and others), the very idea of an intelligent machine was often tantamount to a machine that can perform logical inference: one that can validly extract conclusions from given premises. Automated theorem proving, as the field is known today, has thus been an integral part of AI from the very beginning, although, as we will see, its relevance has been hotly debated, especially over the last two decades. Broadly speaking, the problem of mechanizing deduction can be couched in three different forms. Listed in order of increasing difficulty, we have:

Proof checking: Given a deduction D that purports to derive a conclusion P from a number of premises P1, . . . , Pn, decide whether or not D is sound.

Proof discovery: Given a number of premises P1, . . . , Pn and a putative conclusion P, decide whether P follows logically from the premises, and if it does, produce a formal proof of it.

Conjecture generation: Given a number of premises P1, . . . , Pn, infer an “interesting” conclusion P that follows logically from the premises, and produce a proof of it.

Technically speaking, the first problem is the easiest. In the case of predicate logic with equality, the problem of checking the soundness of a given proof is not only algorithmically solvable, but quite efficiently solvable. Nevertheless, the problem is pregnant with interesting philosophical and technical issues, and its relevance to AI was realized early on by McCarthy (1962), who wrote that “checking mathematical proofs is potentially one of the most interesting and useful applications of automatic computers.” For instance, insofar as proofs are supposed to express reasoning, we can ask whether the formalism in which the input proof D is expressed provides a good formal model of deductive reasoning. Hilbert-style formal proofs (long lists of formulas, each of which is either an axiom or follows from previous formulas by one of a small number of inference rules) were important as tools for metamathematical investigations, but did not capture deductive reasoning as practiced by humans. That provided the incentive for important research into logical formalisms that mirrored human reasoning, particularly as carried out by mathematicians. S. Jáskowski (1934) devised a system of natural deduction that was quite successful in that respect. (Gentzen (1969) independently discovered similar systems, but with crucial differences from Jáskowski’s work.5)

4. Many examples can be given. One is the frame problem, which we discuss in section 6. Another is defeasible reasoning, which is the problem of how to formalize inference in the face of the fact that much everyday reasoning only temporarily commits us to conclusions, in light of the fact that newly arrived knowledge often defeats prior arguments. E.g., you no doubt currently believe that your domicile is perfectly intact (and could if pressed give an argument in defense of your belief), but if you suddenly learned that a vicious tornado had recently passed through your town (city, county, etc.), you might retract that belief, or at least change the strength of it. Defeasible reasoning has been studied and (to a degree) mechanized by the philosopher John Pollock (1992), whose fundamental approach in this area aligns with Chisholm’s (1966) approach to defeasible inference.
5. The system of Jáskowski dealt with hypotheses by introducing a crucial notion of scope. Gentzen worked with sequents instead. See Pelletier (1999) for a detailed discussion.
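The efficiency of proof checking is easy to see concretely. The sketch below is not from the chapter; its representations and names are illustrative. It checks a Hilbert-style propositional proof in which every line must be a given premise or follow from two earlier lines by modus ponens; verification is a simple scan over the proof, running in time polynomial in its length.

```python
# A minimal proof checker: each line of a proof must be a stated premise
# or follow from two earlier lines by modus ponens (from A and A -> B, infer B).
# Formulas are atoms (strings) or implications encoded as tuples ('->', A, B).

def follows_by_mp(candidate, earlier):
    """Is `candidate` derivable by modus ponens from the lines derived so far?"""
    for a in earlier:
        for b in earlier:
            if b == ('->', a, candidate):  # b asserts: a -> candidate
                return True
    return False

def check_proof(premises, proof):
    """Return True iff every proof line is a premise or follows by modus ponens."""
    derived = []
    for line in proof:
        if line in premises or follows_by_mp(line, derived):
            derived.append(line)
        else:
            return False
    return True

# Example: from P and P -> Q, derive Q.
P, Q = 'P', 'Q'
premises = [P, ('->', P, Q)]
proof = [P, ('->', P, Q), Q]
print(check_proof(premises, proof))  # True
print(check_proof(premises, [Q]))    # False: Q appears without justification
```

Note that the checker never searches for a proof; it only verifies one that is handed to it, which is precisely why this first problem is so much easier than the other two.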

The ideas of natural deduction introduced by Jáskowski and Gentzen later played a key role, not only in theorem proving and AI, but in computational cognitive science as well. In particular, mental logic (Osherson 1975, Braine and O’Brien 1998, Rips 1994), a family of computational cognitive theories of human deductive reasoning, was heavily influenced by natural deduction.

The second problem is considerably harder. Early results in recursive function theory (Turing 1936, Church 1936) established that there is no Turing machine which can decide whether an arbitrary formula of first-order logic is valid (that was Hilbert’s Entscheidungsproblem). Therefore, by Church’s thesis, it follows that the problem is algorithmically unsolvable—there is no general mechanical method that will always make the right decision in a finite amount of time. However, humans have no guarantee of always solving the problem either (and indeed often fail to do so). Accordingly, AI can look for conservative approximations that are as good as they can possibly get: programs that give the right answer as often as possible, and otherwise do not give an answer at all (either failing explicitly, or else going on indefinitely until we stop them). The problem was tackled early on for weaker formalisms, with seemingly promising results: the Logic Theorist (LT) of Newell, Simon, and Shaw, presented at the inaugural 1956 AI conference at Dartmouth mentioned earlier, managed to prove 38 out of the 52 propositional-logic theorems of Principia Mathematica. Other notable early efforts included an implementation of Presburger arithmetic by Martin Davis in 1954 at Princeton’s Institute for Advanced Study (Davis 2001), the Davis-Putnam procedure (M. Davis and H. Putnam 1960), variations of which are used today in many satisfiability-based provers, and an impressive system for first-order logic built by Wang (1960).
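The Davis-Putnam line of work can be illustrated with a small satisfiability procedure. The sketch below is illustrative, not the historical implementation: it is the splitting variant usually called DPLL, combining unit propagation with case analysis on a chosen variable, and it decides satisfiability of a propositional formula in conjunctive normal form (a decidable special case, unlike full first-order validity).

```python
# A CNF formula is a list of clauses; each clause is a set of literals;
# a literal is a pair (variable_name, polarity), e.g. ('P', True) for P
# and ('P', False) for not-P.

def assign(clauses, lit):
    """Simplify the clause set under the assumption that `lit` is true."""
    var, val = lit
    neg = (var, not val)
    result = []
    for c in clauses:
        if lit in c:
            continue                # clause satisfied: drop it
        result.append(c - {neg})    # remove the falsified literal
    return result

def dpll(clauses):
    """Return True iff the clause set is satisfiable."""
    if not clauses:
        return True                 # no clauses left: trivially satisfied
    if any(len(c) == 0 for c in clauses):
        return False                # an empty clause is unsatisfiable
    for c in clauses:
        if len(c) == 1:             # unit propagation: a forced literal
            return dpll(assign(clauses, next(iter(c))))
    var = next(iter(clauses[0]))[0]  # split on some variable
    return dpll(assign(clauses, (var, True))) or dpll(assign(clauses, (var, False)))

# (P or Q) and (not P) and (not Q): unsatisfiable.
unsat = [{('P', True), ('Q', True)}, {('P', False)}, {('Q', False)}]
print(dpll(unsat))                                          # False
print(dpll([{('P', True), ('Q', True)}, {('P', False)}]))   # True: set Q
```

In the worst case the splitting step still explores exponentially many assignments, which is consistent with the chapter’s point: even where decision procedures exist, there is no guarantee of feasibility.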
It should be noted that whereas LT was intentionally designed to simulate human reasoning and problem-solving processes, the authors of these other systems believed that mimicking human processes was unnecessarily constraining, and that better results could be achieved by doing away with cognitive plausibility. This was an early manifestation of a tension that is still felt in the field, and which parallels the distinction between strong and weak forms of AI: AI as science, particularly as the study of human cognition, vs. AI as engineering—the construction of intelligent systems whose operation need not resemble human thought.

Robinson’s discovery of unification and the resolution method (Robinson 1965) provided a major boost to the field. Most automated theorem provers today are based on resolution.6 Other prominent formalisms include semantic tableaux and equational logic (Robinson and Voronkov 2001). While there has been an impressive amount of progress over the last 10 years, largely spurred by the annual CADE ATP system competition,7 the most sophisticated ATPs today continue to be brittle, and often fail on problems that would be trivial for college undergraduates.

The third problem, that of conjecture generation, is the most difficult, but it is also the most interesting. Conjectures do not fall from the sky, after all. Presented with a body of information, humans—particularly mathematicians—regularly come up with interesting conjectures and then often set out to prove those conjectures, usually with success. This discovery process (along with new concept formation) is one of the most creative activities of the human intellect. The sheer difficulty of simulating this creativity computationally is surely a chief reason why AI has achieved rather minimal progress here.
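Unification, the engine of Robinson’s resolution method mentioned above, can itself be sketched compactly. The code below is illustrative rather than drawn from the chapter (the term encoding and function names are assumptions): it computes a most general unifier of two first-order terms, or reports failure.

```python
# Terms: variables are strings beginning with '?'; compound terms are tuples
# (functor, arg1, ..., argn); any other value is a constant.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Chase variable bindings to see what a term currently stands for."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def bind(v, t, subst):
    if occurs(v, t, subst):
        return None                 # reject circular bindings like ?x = f(?x)
    return {**subst, v: t}

def unify(t1, t2, subst=None):
    """Return a most general unifier (a substitution dict), or None."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return bind(t1, t2, subst)
    if is_var(t2):
        return bind(t2, t1, subst)
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and len(t1) == len(t2) and t1[0] == t2[0]:
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                     # clashing constants or functors

# knows(?x, socrates) unifies with knows(plato, ?y):
print(unify(('knows', '?x', 'socrates'), ('knows', 'plato', '?y')))
# {'?x': 'plato', '?y': 'socrates'}
```

Resolution then uses such unifiers to make two clauses share a complementary literal, collapsing all inference into a single uniform rule, which is one reason it proved so amenable to mechanization.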
But another reason is that throughout most of the previous century (and really beginning with Frege in the nineteenth century), logicians and philosophers were concerned almost exclusively with justification rather than with discovery. This applied not only to deductive reasoning but to inductive reasoning as well, and indeed to scientific theorizing in general (Reichenbach 1938). It was widely felt that the discovery process should be studied by psychologists, not by philosophers and logicians. Interestingly, this was not the case prior to Frege. Philosophers such as Descartes (1988), Bacon (2002), Mill (1874), and Peirce (1960) had all attempted to study the discovery process rationally and to formulate rules for guiding it. Beginning with Hanson (1958) in science and with Lakatos (1976) in mathematics, philosophers started re-emphasizing discovery.8 AI researchers also attempted to model discovery computationally, both in science (Langley, Simon, Bradshaw and Zytkow 1987) and in mathematics (Lenat 1976, Lenat 1983), and this line of work has led to machine-learning innovations in AI such as genetic programming (Koza 1992) and inductive logic programming (Muggleton 1992). However, the successes have been limited, and fundamental objections to algorithmic treatments of discovery and creativity in general—e.g., such as those put forth by Hempel (1985)—remain trenchant. A major issue is the apparently holistic character of higher cognitive processes such as creative reasoning, and the difficulty of formulating a rigorous characterization of relevance. Without a precise notion of relevance, one that is amenable to computational implementation, there seems to be little hope for progress on the conjecture generation problem, or on any of the other similar problems, including concept generation and abductive hypothesis formation.

6. For instance, systems such as Spass (Weidenbach 2001), Vampire (Voronkov 1995), and Otter (Wos, Overbeek, Lusk and Boyle 1992).
7. CADE is an acronym for Conference on Automated Deduction; for more information on the annual CADE ATP competition, see Pelletier, Sutcliffe and Suttner (2002).

Faced with relatively meager progress on the hard reasoning problems, and perhaps influenced by various other critiques of symbolic AI (see section 6), some AI researchers have launched serious attacks on formal logic, which they have criticized as an overly rigid system that does not provide a good model of human reasoning mechanisms, which are eminently flexible. They have accordingly tried to shift the field’s attention and efforts away from rigorous deductive and inductive reasoning, turning them toward “commonsense reasoning” instead. For instance, Minsky (1986, p. 167) writes:

For generations, scientists and philosophers have tried to explain ordinary reasoning in terms of logical principles with virtually no success. I suspect this enterprise failed because it was looking in the wrong direction: common sense works so well not because it is an approximation of logic; logic is only a small part of our great accumulation of different, useful ways to chain things together.

A good deal of work has been done toward developing formal, rigorous logical systems for modeling commonsense reasoning (Davis and Morgenstern 2004). However, critics charge that such efforts miss the greater point. For instance, Winograd (1990) writes that “Minsky places the blame for lack of success in explaining ordinary reasoning on the rigidity of logic, and does not raise the more fundamental questions about the nature of all symbolic representations and of formal (though possibly non-logical) systems of rules for manipulating them. There are basic limits to what can be done with symbol manipulation, regardless of how many ‘different,
