Artificial-intelligence hardware: New opportunities for semiconductor companies

Artificial intelligence is opening the best opportunities for semiconductor companies in decades. How can they capture this value?

Gaurav Batra, Zach Jacobson, Siddarth Madhav, Andrea Queirolo, and Nick Santhanam

December 2018 | High Tech

Software has been the star of high tech over the past few decades, and it's easy to understand why. With PCs and mobile phones, the game-changing innovations that defined this era, the architecture and software layers of the technology stack enabled several important advances. In this environment, semiconductor companies were in a difficult position. Although their innovations in chip design and fabrication enabled next-generation devices, they received only a small share of the value coming from the technology stack—about 20 to 30 percent with PCs and 10 to 20 percent with mobile.

But the story for semiconductor companies could be different with the growth of artificial intelligence (AI)—typically defined as the ability of a machine to perform cognitive functions associated with human minds, such as perceiving, reasoning, and learning. Many AI applications have already gained a wide following, including virtual assistants that manage our homes and facial-recognition programs that track criminals. These diverse solutions, as well as other emerging AI applications, share one common feature: a reliance on hardware as a core enabler of innovation, especially for logic and memory functions.

What will this development mean for semiconductor sales and revenues? And which chips will be most important to future innovations? To answer these questions, we reviewed current AI solutions and the technology that enables them. We also examined opportunities for semiconductor companies across the entire technology stack. Our analysis revealed three important findings about value creation:

- AI could allow semiconductor companies to capture 40 to 50 percent of total value from the technology stack, representing the best opportunity they've had in decades.
- Storage will experience the highest growth, but semiconductor companies will capture most value in compute, memory, and networking.
- To avoid the mistakes that limited value capture in the past, semiconductor companies must undertake a new value-creation strategy that focuses on enabling customized, end-to-end solutions for specific industries, or "microverticals."

By keeping these findings in mind, semiconductor leaders can create a new road map for winning in AI. This article begins by reviewing the opportunities that they will find across the technology stack, focusing on the impact of AI on hardware demand at data centers and the edge (computing that occurs with devices, such as self-driving cars). It then examines specific opportunities within compute, memory, storage, and networking. The article also discusses new strategies that can help semiconductor companies gain an advantage in the AI market, as well as issues they should consider as they plan their next steps.

The AI technology stack will open many opportunities for semiconductor companies

AI has made significant advances since its emergence in the 1950s, but some of the most important developments have occurred recently as developers created sophisticated machine-learning (ML) algorithms that can process large data sets, "learn" from experience, and improve over time. The greatest leaps came in the 2010s because of advances in deep learning (DL), a type of ML that can process a wider range of data, requires less data preprocessing by human operators, and often produces more accurate results.

To understand why AI is opening opportunities for semiconductor companies, consider the technology stack (Exhibit 1). It consists of nine discrete layers that enable the two activities at the heart of every AI application: training and inference (see sidebar "Training and inference"). When developers are trying to improve training and inference, they often encounter roadblocks related to the hardware layer, which includes storage, memory, logic, and networking.

Exhibit 1: The technology stack for artificial intelligence (AI) contains nine layers.

- Services — Solution and use case: integrated solutions that include training data, models, hardware, and other components (eg, voice-recognition systems)
- Training — Data types: data presented to AI systems for analysis
- Platform — Methods: techniques for optimizing the weights given to model inputs
- Platform — Architecture: structured approach to extract features from data (eg, convolutional or recurrent neural networks)
- Platform — Algorithm: a set of rules that gradually modifies the weights given to certain model inputs within the neural network during training to optimize inference
- Platform — Framework: software packages to define architectures and invoke algorithms on the hardware through the interface
- Interface — Interface systems: systems within the framework that determine and facilitate communication pathways between software and underlying hardware
- Hardware — Head node: hardware unit that orchestrates and coordinates computations among accelerators
- Hardware — Accelerator: silicon chip designed to perform the highly parallel operations required by AI; also enables simultaneous computations. Accelerators draw on four device types:
  - Memory: electronic data repository for short-term storage during processing; typically DRAM (dynamic random access memory)
  - Storage: electronic repository for long-term storage of large data sets; typically NAND (not-AND) flash
  - Logic: processor optimized to calculate neural-network operations, ie, convolution and matrix multiplication; typically a CPU (central processing unit), GPU (graphics-processing unit), FPGA (field programmable gate array), and/or ASIC (application-specific integrated circuit)
  - Networking: switches, routers, and other equipment used to link servers in the cloud and to connect edge devices

Source: Expert interviews; literature search

By providing next-generation accelerator architectures, semiconductor companies could increase computational efficiency or facilitate the transfer of large data sets through memory and storage. For instance, specialized memory for AI has 4.5 times more bandwidth than traditional memory, making it much better suited to handling the vast stores of big data that AI applications require. This performance improvement is so great that many customers would be willing to pay the higher price that specialized memory requires (about $25 per gigabyte, compared with $8 for standard memory).

AI will drive a large portion of semiconductor revenues for data centers and the edge

With hardware serving as a differentiator in AI, semiconductor companies will find greater demand for their existing chips, but they could also profit by developing novel technologies, such as workload-specific AI accelerators (Exhibit 2). We created a model to estimate how these AI opportunities would affect revenues and to determine whether AI-related chips would constitute a significant portion of future demand (see sidebar "How we estimated value" for more information on our methodology).
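To make that trade-off concrete, here is a back-of-the-envelope comparison using only the figures quoted above (the 4.5x bandwidth multiple and the roughly $25 versus $8 per-gigabyte prices). The baseline bandwidth value is a normalization of ours, not a figure from the article:

```python
# Rough bandwidth-per-dollar comparison using the article's figures.
# BASELINE_BANDWIDTH is an arbitrary placeholder; only the ratios matter.
BASELINE_BANDWIDTH = 1.0  # standard DRAM bandwidth (normalized)

standard = {"bandwidth": BASELINE_BANDWIDTH, "price_per_gb": 8.0}
ai_memory = {"bandwidth": 4.5 * BASELINE_BANDWIDTH, "price_per_gb": 25.0}

for name, mem in [("standard", standard), ("AI-specialized", ai_memory)]:
    value = mem["bandwidth"] / mem["price_per_gb"]
    print(f"{name}: {value:.3f} bandwidth units per dollar")

# AI-specialized memory costs ~3.1x more per gigabyte but delivers 4.5x
# the bandwidth, so it offers ~44% more bandwidth per dollar -- one reason
# customers accept the premium.
```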

Exhibit 2: Companies will find many opportunities in the artificial intelligence (AI) market, with leaders already emerging.

- Compute — Existing market: accelerators for parallel processing, such as GPUs (graphics-processing units) and FPGAs (field programmable gate arrays). Potential new opportunities: workload-specific AI accelerators.
- Memory — Existing market: high-bandwidth memory; on-chip memory (SRAM, or static random access memory). Potential new opportunities: emerging non-volatile memory (NVM) as a memory device.
- Storage — Existing market: potential growth in demand for existing storage systems as more data is retained. Potential new opportunities: AI-optimized storage systems; emerging NVM as a storage device.
- Networking — Existing market: infrastructure for data centers. Potential new opportunities: programmable switches; high-speed interconnect.

Source: McKinsey analysis

Our research revealed that AI-related semiconductors will see growth of about 18 percent annually over the next few years—five times greater than the rate for semiconductors used in non-AI applications (Exhibit 3). By 2025, AI-related semiconductors could account for almost 20 percent of all demand, which would translate into about $67 billion in revenue. Opportunities will emerge at both data centers and the edge. If this growth materializes as expected, semiconductor companies will be positioned to capture more value from the AI technology stack than they have obtained with previous innovations—about 40 to 50 percent of the total.

AI will drive most growth in storage, but the best opportunities for value creation lie in other segments

We then took our analysis a bit further by looking at specific opportunities for semiconductor players within compute, memory, storage, and networking. For each area, we examined how hardware demand is evolving at both data centers and the edge. We also quantified the growth expected in each category except networking, where AI-related opportunities for value capture will be relatively small for semiconductor companies.

Compute

Compute performance relies on central processing units (CPUs) and accelerators—graphics-processing units (GPUs), field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs). Since each use case has different compute requirements, the optimal AI hardware architecture will vary. For instance, route-planning applications have different needs for processing speed, hardware interfaces, and other performance features than applications for autonomous driving or financial risk stratification (Exhibit 4).

Overall, demand for compute hardware will increase by about 10 to 15 percent through 2025 (Exhibit 5). After analyzing more than 150 DL use cases, looking at both inference and training requirements, we were able to identify the architectures most likely to gain ground in data centers and the edge (Exhibit 6).
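As a crude illustration of how those use-case trade-offs map to logic devices, here is a heuristic sketch of our own; it is not the article's decision logic, and real selection weighs many more factors:

```python
# Illustrative-only heuristic for matching workload traits to devices.
def pick_accelerator(flexibility: bool, low_power_at_volume: bool,
                     fast_time_to_market: bool) -> str:
    if low_power_at_volume:
        return "ASIC"   # fixed-function efficiency once volume justifies the design cost
    if fast_time_to_market:
        return "FPGA"   # reprogrammable; good for prototypes and niche workloads
    if flexibility:
        return "GPU"    # highly parallel but still fairly general purpose
    return "CPU"        # default for modest, varied workloads

# e.g., an edge accelerator shipping in millions of cars:
print(pick_accelerator(flexibility=False, low_power_at_volume=True,
                       fast_time_to_market=False))  # -> ASIC
```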

Data-center usage. Most compute growth will stem from higher demand for AI applications at cloud-computing data centers. At these locations, GPUs are now used for almost all training applications. We expect that they will soon begin to lose market share to ASICs, until the compute market is about evenly divided between these solutions by 2025. As ASICs enter the market, GPUs will likely become more customized to meet the demands of DL. In addition to ASICs and GPUs, FPGAs will have a small role in future AI training, mostly for specialized data-center applications that must reach the market quickly or require customization, such as those for prototyping new DL applications.

For inference, CPUs now account for about 75 percent of the market. They'll lose ground to ASICs as DL applications gain traction. Again, we expect to see an almost equal divide in the compute market, with CPUs accounting for 50 percent of demand in 2025 and ASICs for 40 percent.

Edge applications. Most edge training now occurs on laptops and other personal computers, but more devices may begin recording data and playing a role in on-site training. For instance, drills used during oil and gas exploration generate data related to a well's geological characteristics that could be used to train models. For accelerators, the training market is now evenly divided between CPUs and ASICs. In the future, however, we expect that ASICs built into systems on chips will account for 70 percent of demand. FPGAs will represent about 20 percent of demand and will be used for applications that require significant customization.

When it comes to inference, most edge devices now rely on CPUs or ASICs, with a few applications—such as autonomous cars—requiring GPUs. By 2025, we expect that ASICs will account for about 70 percent of the edge-inference market and GPUs 20 percent.

Sidebar: Training and inference

All AI applications must be capable of training and inference. To understand the importance of these tasks, consider their role in helping self-driving cars avoid obstacles. During the training phase, developers present images to the neural net—for instance, those of dogs or pedestrians—and perform recognition tests. They then refine network parameters until the neural net displays high accuracy in visual detection. After the network has viewed millions of images and is fully trained, it enables recognition of dogs and pedestrians during the inference phase.

The cloud is an ideal location for training because it provides access to vast stores of data from multiple servers—and the more information an AI application reviews during training, the better its algorithm will become. Further, the cloud can reduce expenses because it allows graphics-processing units (GPUs) and other expensive hardware to train multiple AI models. Since training occurs intermittently on each model, capacity is not an issue.

With inference, AI algorithms handle less data but must generate responses more rapidly. A self-driving car doesn't have time to send images to the cloud for processing once it detects an object in the road, nor do medical applications that evaluate critically ill patients have leeway when interpreting brain scans after a hemorrhage. And that makes the edge, or in-device computing, the best choice for inference.
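The two phases in the sidebar can be made concrete with a toy sketch. The snippet below uses PyTorch purely for illustration; the framework choice, network shape, and all numbers are ours, not the article's:

```python
# Toy illustration of training vs. inference (illustrative, not from the
# article). Training adjusts weights from labeled examples; inference
# applies the frozen model to new inputs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# --- Training phase: refine parameters until accuracy is acceptable ---
images = torch.randn(256, 64)          # stand-in for image features
labels = torch.randint(0, 2, (256,))   # e.g., 0 = dog, 1 = pedestrian
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                    # compute gradients
    optimizer.step()                   # update weights

# --- Inference phase: no gradients, low latency, often at the edge ---
model.eval()
with torch.no_grad():
    new_image = torch.randn(1, 64)
    prediction = model(new_image).argmax(dim=1)
```

Training is the throughput-hungry, intermittent workload that suits the cloud; the gradient-free inference call at the end is the latency-sensitive step that belongs on the device.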

[Exhibit 3: Growth for semiconductors related to artificial intelligence (AI) is expected to be five times greater than growth in the remainder of the market. Panels show the AI and non-AI semiconductor total available market, in $ billion and in percent, for 2017, 2020 (estimated), and 2025 (estimated), plus the estimated AI semiconductor CAGR for 2017–25. The total available market includes processors, memory, and storage; it excludes discretes, optical, and micro-electro-mechanical systems. Source: Bernstein; Cisco Systems; Gartner; IC Insights; IHS Markit; Machina Research; McKinsey analysis]

Memory

AI applications have high memory-bandwidth requirements, since the computing layers within deep neural networks must pass input data to thousands of cores as quickly as possible. Memory is required—typically dynamic random access memory (DRAM)—to store input data, weight model parameters, and perform other functions during both inference and training. Consider a model being trained to recognize the image of a cat. All intermediate results in the recognition process—for example, colors, contours, textures—need to reside in memory as the model fine-tunes its algorithms. Given these requirements, AI will create a strong opportunity for the memory market, with value expected to increase from $6.4 billion in 2017 to $12.0 billion in 2025.

That said, memory will see the lowest annual growth of the three accelerator categories—about 5 to 10 percent—because of efficiencies in algorithm design, such as reduced bit precision, and because capacity constraints in the industry are easing. Most short-term memory growth will result from increased demand at data centers for the high-bandwidth DRAM required to run AI, ML, and DL algorithms. But over time, the demand for AI memory at the edge will increase—for instance, connected cars may need more DRAM.
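To see why bandwidth, rather than capacity, is often the binding constraint, consider a back-of-the-envelope helper; the model size and activation footprint below are hypothetical numbers of ours, not figures from the article:

```python
# Back-of-the-envelope helper (hypothetical numbers, not from the article):
# estimates the DRAM traffic a model generates per training step.
BYTES_PER_PARAM = 4  # fp32

def training_step_traffic_gb(params: int, activation_bytes: int) -> float:
    """Rough DRAM bytes moved per step: read weights, write gradients,
    and spill intermediate activations (the colors, contours, and textures
    in the cat example above) for the backward pass."""
    weights = params * BYTES_PER_PARAM      # forward read
    gradients = params * BYTES_PER_PARAM    # backward write
    activations = 2 * activation_bytes      # store, then reload
    return (weights + gradients + activations) / 1e9

# A hypothetical 25M-parameter vision model with 2 GB of activations
# moves roughly 4.2 GB of DRAM traffic on every training step:
print(training_step_traffic_gb(25_000_000, 2_000_000_000))
```

Multiplied across thousands of steps per second of accelerator throughput, traffic on this scale is what the high-bandwidth solutions below are designed to absorb.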

Current memory is typically optimized for CPUs, but developers are now exploring new architectures. Solutions that are attracting more interest include the following:

- High-bandwidth memory (HBM). This technology allows AI applications to process large data sets at maximum speed while minimizing power requirements. It allows DL compute processors to access a three-dimensional stack of memory through a fast connection called a through-silicon via (TSV). AI chip leaders such as Google and Nvidia have adopted HBM as the preferred memory solution, although it costs three times more than traditional DRAM per gigabyte—a move that signals their customers are willing to pay for expensive AI hardware in return for performance gains.

- On-chip memory. For a DL compute processor, storing and accessing data in DRAM or other outside memory sources can take 100 times longer than accessing memory on the same chip. When Google designed the tensor-processing unit (TPU), an ASIC specialized for AI, it included enough memory to store an entire model on the chip. Start-ups such as Graphcore are also increasing on-chip memory capacity, taking it to a level about 1,000 times more than what is found on a typical GPU, through a novel architecture that maximizes the speed of AI calculations. The cost of on-chip memory is still prohibitive for most applications, and chip designers must address this challenge.

[Exhibit 4: The optimal compute architecture will vary by use case. Maps example use cases—language translation and speech understanding, face detection, financial risk stratification, and route planning—against criteria such as processing speed, size and form factor, processing per watt, flexibility, and cost, and against the CPU, GPU, FPGA, and ASIC devices best suited to each. Source: Expert interviews; McKinsey analysis]

[Exhibit 5: At both data centers and the edge, demand for training and inference hardware is growing. Panels show the total market, $ billion, for training and inference hardware at data centers and the edge in 2017 and 2025. Source: Expert interviews; McKinsey analysis]

Storage

AI applications generate vast volumes of data—about 80 exabytes per year, a figure expected to increase to 845 exabytes by 2025. In addition, developers are now using more data in AI and DL training, which also increases storage requirements. These shifts could lead to annual growth of 25 to 30 percent from 2017 to 2025 for storage—the highest rate of all the segments we examined. Manufacturers will increase their output of storage accelerators in response, with pricing dependent on supply staying in sync with demand.

Unlike traditional storage solutions, which tend to take a one-size-fits-all approach across different use cases, AI solutions must adapt to changing needs—and those depend on whether an application is used for training or inference. For instance, AI training systems must store massive volumes of data as they refine their algorithms, but AI inference systems store only input data that might be useful in future training. Overall, demand for storage will be higher for AI training than for inference.

One potential disruption in storage is new forms of non-volatile memory (NVM), which have characteristics that fall between those of traditional memory, such as DRAM, and traditional storage, such as NAND flash. They promise higher density than DRAM, better performance than NAND, and better power consumption than both. These characteristics will enable new applications and allow NVM to substitute for DRAM and NAND in others. The market for these forms of NVM is currently small—representing about $1 billion to $2 billion in revenue over the next two years—but it is projected to account for more than $10 billion in revenue by 2025.
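For context, the annual growth rate implied by those two data-volume endpoints can be derived directly, assuming the 80-exabyte figure refers to 2017; note that it describes data generated, not the 25 to 30 percent revenue growth cited for the storage segment:

```python
# Implied annual growth of AI-generated data from the article's endpoints:
# ~80 exabytes per year in 2017 rising to ~845 exabytes by 2025.
data_2017, data_2025, years = 80.0, 845.0, 2025 - 2017

annual_growth = (data_2025 / data_2017) ** (1 / years) - 1
print(f"Implied data-volume CAGR: {annual_growth:.0%}")  # ~34% per year
```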

The NVM category includes multiple technologies, all of which differ in memory-access time and cost, and all of which are at various stages of maturity. Magnetoresistive random-access memory (MRAM) has the lowest read and write latency, with data retention greater than five years and excellent endurance. However, its capacity scaling is limited, making it a costly alternative that may be used for frequently accessed caches rather than as a long-term data-retention solution. Resistive random-access memory (ReRAM) could potentially scale vertically, giving it an advantage in scaling and cost, but it has slower latency and reduced endurance. Phase-change memory (PCM) fits in between the two, with 3D XPoint being the most well-known example. Endurance and error rates will be key barriers that must be overcome before these technologies see more widespread adoption.

Networking

AI applications require many servers during training, and the number increases with time. For instance, developers need only one server to build an initial AI model and under 100 to improve its structure. But training with real data—the logical next step—could require several hundred. Autonomous-driving models require over 140 servers to reach 97 percent accuracy in detecting obstacles.

If the speed of the network connecting servers is slow—as is usually the case—it will cause training bottlenecks. Although most strategies for improving network speed now involve data-center hardware, developers are investigating other options, including programmable switches that can route data in different directions. This capability will accelerate one of the most important training tasks: resynchronizing input weights among multiple servers whenever model parameters are updated. With programmable switches, resynchronization can occur almost instantly, which could increase training speed by two to ten times. The greatest performance gains would come with large AI models, which use the most servers.

Another option for improving networking involves using high-speed interconnections in servers. This technology can produce a threefold improvement in performance, but it's also about 35 percent more expensive.
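To illustrate what the weight resynchronization described above involves, here is a minimal parameter-averaging sketch in NumPy. It is a toy of ours: real clusters perform this as a collective all-reduce over the network fabric, which is exactly the step that programmable switches and faster interconnects accelerate:

```python
import numpy as np

# Each server holds its own copy of the model weights and computes
# different updates from its shard of the training data.
rng = np.random.default_rng(0)
num_servers, num_weights = 4, 8
local_weights = [rng.normal(size=num_weights) for _ in range(num_servers)]

# Resynchronization: average every server's weights so all replicas agree
# before the next training step. On a cluster, this traffic crosses the
# network every time parameters are updated, which is why in-network
# acceleration can speed training severalfold.
synchronized = np.mean(local_weights, axis=0)
local_weights = [synchronized.copy() for _ in range(num_servers)]
```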

Semiconductor companies need new strategies for the AI market

It's clear that opportunities abound, but success isn't guaranteed for semiconductor players. To capture the value they deserve, they'll need to focus on end-to-end solutions for specific industries (also called microvertical solutions), ecosystem development, and innovation that goes far beyond improving compute, memory, and networking technologies.

Sidebar: How we estimated value

We took a bottom-up approach to estimating the value at stake for semiconductor companies. Consider accelerators used for compute functions. First, we determined the percentage of servers in data centers that were used for AI. We then identified the type of logic device they commonly used and the average sales price for related accelerators. For edge computing, we conducted a similar review, but we focused on determining the number of devices that were used for AI, rather than servers. By combining our insights for data centers and edge devices, we could estimate the potential value for semiconductor companies related to compute functions.

Customers will value end-to-end solutions for microverticals that deliver a strong return on investment

[Exhibit 6: The preferred architectures for compute are shifting in data centers and the edge. Panels show the percentage of data-center and edge compute architectures accounted for by CPUs, GPUs, FPGAs, and ASICs for training and inference in 2017 and 2025. Source: Expert interviews; McKinsey analysis]

AI hardware solutions are only useful if they're compatible with all the other layers of the technology stack, including the solutions and use cases in the services layer. Semiconductor companies can take two paths to achieve this goal, and a few have already begun doing so. First, they could work with partners to develop AI hardware for industry-specific use cases, such as oil and gas exploration, to create an end-to-end solution. For example, Mythic has developed an ASIC to support edge inference for image- and voice-recognition applications within the healthcare and military industries. Alternatively, semiconductor companies could focus on developing AI hardware that enables broad, cross-industry solutions, as Nvidia does with GPUs.

The path taken will vary by segment. With memory and storage players, solutions tend to have the same technology requirements across microverticals. In compute, by contrast, AI algorithm requirements may vary significantly. An edge accelerator in an autonomous car must process very different data from a language-translation application that relies on the cloud. Under these circumstances, companies cannot rely on other players to build the other layers of the stack so that they are compatible with their hardware.

Active participation in ecosystems is vital for success

Semiconductor players will need to create an ecosystem of software developers that prefer their hardware by offering products with wide appeal. In return, they'll have more influence over design choices.

For instance, developers who prefer certain hardware will use it as a starting point when building their applications. They'll then look for other components that are compatible with it.

To help draw software developers into their ecosystems, semiconductor companies should reduce complexity whenever possible. Since there are now more types of AI hardware than ever, including new accelerators, players should offer simple interfaces and software-platform capabilities. For instance, Nvidia provides developers with the Compute Unified Device Architecture (CUDA), a parallel-computing platform and application programming interface (API) that works with multiple programming languages. It allows software developers to use CUDA-enabled GPUs for general-purpose processing. Nvidia also provides software developers with access to a collection of primitives for use in DL applications. The platform has now been deployed across thousands of applications.

Within strategically important industry sectors, Nvidia also offers customized software-development kits. To assist with the development of software for self-driving cars, for instance, Nvidia created DriveWorks, a kit with ready-to-use software tools, including object-detection libraries that can help applications interpret data from cameras and sensors in self-driving cars.

As preference for certain hardware architectures builds throughout the developer community, semiconductor companies will see their visibility soar, resulting in better brand recognition. They'll also see higher adoption rates and greater customer loyalty, resulting in lasting value.

Only platforms that add real value for end users will be able to compete against comprehensive offerings from large high-tech players, such as Google's TensorFlow, an open-source library of ML and DL models and algorithms. TensorFlow supports Google's core products, such as Google Translate, and also helps the company solidify its position within the AI technology stack, since TensorFlow is compatible with multiple compute accelerators.

Innovation is paramount, and players must go up the stack

Many hardware players who want to enable AI innovation focus on improving the computation process. Traditionally, this strategy has involved offering optimized compute accelerators or streamlining paths between compute and data through innovations in memory, storage, and networking. But hardware players should go beyond these steps and seek other forms of innovation by going up the stack. For example, AI-based facial-recognition systems for secure authentication on smartphones were enabled by specialized software and a 3-D sensor that projects thousands of invisible dots to capture a geometric map of a user's face. Because these dots are much easier to process than the several million pixels from cameras, these authentication systems work in a fraction of a second and don't interfere with the user experience. Hardware companies could also think about how sensors or other innovative technologies can enable emerging AI use cases.
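A rough comparison shows why the dot map described above is so much cheaper to process. The 30,000-dot and 12-megapixel figures below are typical published values for such sensors and cameras, not numbers from the article:

```python
# Rough comparison of a structured-light depth map with a camera frame.
# Both figures are illustrative, not from the article.
depth_dots = 30_000          # projected infrared dots (typical sensor)
camera_pixels = 12_000_000   # a 12-megapixel camera frame

print(f"The camera frame has ~{camera_pixels // depth_dots}x more points")
# ~400x more points to process, which is why dot-based face geometry can
# be matched in a fraction of a second on-device.
```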
Semiconductor companies must define their AI strategy now

Semiconductor companies that are first movers in the AI space will be more likely to attract and retain customers and ecosystem partners—and that could prevent later entrants from attaining a leading position in the market. With both major technology players and start-ups launching independent efforts in the AI hardware space now, the window of opportunity for staking a claim will rapidly shrink over the next few years. To establish a strong strategy now, companies should focus on three questions:

- Where to play? The first step in creating a focused strategy involves identifying the target industry microverticals and AI use cases. At the most basic level, this involves estimating the size of the opportunity within different verticals, as well as the particular pain points that AI solutions could eliminate. On the technical side, companies should decide if they want to focus on hardware for data centers or the edge.

- How to play? When bringing a new solution to market, semiconductor companies should adopt a partnership mind-set, since they might gain a competitive edge by collaborating with established players within specific industries. They should also determine what organizational structure will work best for their business. In some cases, they might w
