An End-to-End Deep Neural Network for Autonomous Driving


Sensors, Article

An End-to-End Deep Neural Network for Autonomous Driving Designed for Embedded Automotive Platforms

Jelena Kocić *, Nenad Jovičić and Vujo Drndarević

School of Electrical Engineering, University of Belgrade, 11120 Belgrade, Serbia; nenad@etf.rs (N.J.); vujo@etf.rs (V.D.)
* Correspondence: cvetakg@gmail.com; Tel.: 381-64-9258-651

Received: 15 March 2019; Accepted: 30 April 2019; Published: 3 May 2019

Abstract: In this paper, one solution for an end-to-end deep neural network for autonomous driving is presented. The main objective of our work was to achieve autonomous driving with a light deep neural network suitable for deployment on embedded automotive platforms. There are several end-to-end deep neural networks used for autonomous driving, where the input to the machine learning algorithm are camera images and the output is the steering angle prediction, but those convolutional neural networks are significantly more complex than the network architecture we are proposing. The network architecture, computational complexity, and performance evaluation during autonomous driving using our network are compared with two other convolutional neural networks that we re-implemented with the aim to have an objective evaluation of the proposed network. The trained model of the proposed network is four times smaller than the PilotNet model and about 250 times smaller than the AlexNet model. While the complexity and size of the novel network are reduced in comparison to other models, which leads to lower latency and higher frame rate during inference, our network maintained the performance, achieving successful autonomous driving with similar efficiency compared to autonomous driving using the two other models. Moreover, the proposed deep neural network downsized the needs for real-time inference hardware in terms of computational power, cost, and size.

Keywords: autonomous driving; camera; convolutional neural network; deep neural network; embedded systems; end-to-end learning; machine learning

Sensors 2019, 19, 2064; doi:10.3390/s19092064; www.mdpi.com/journal/sensors

1. Introduction

Research and development in the field of machine learning, and more precisely deep learning, lead to many discoveries and practical applications in different domains. The domain where machine learning has a huge impact is the automotive industry and the development of fully autonomous vehicles. Machine learning solutions are used in several autonomous vehicle subsystems, such as perception, sensor fusion, simultaneous localization and mapping, and path planning. In parallel with work on the full autonomy of commercial vehicles, the development of various automotive platforms is the current trend; for example, delivery vehicles or various robots and robot-cars used in warehouses. The main idea of our work was to develop a solution for autonomous driving for a light automotive platform that has limited hardware resources, processor power, and memory size. Having those hardware restrictions in mind, we aimed to design a light deep neural network (DNN), an end-to-end neural network able to perform the task of autonomous driving on a representative track, while the developed network's model used for inference can be deployed on a low-performance hardware platform.

The autonomous driving system can be generally divided into four blocks: sensors, perception subsystem, planning subsystem, and control of the vehicle, Figure 1. The vehicle is sensing the world using many different sensors mounted on the vehicle. The information from the sensors is processed in the perception subsystem, where perception components combine sensor data into meaningful information. The planning subsystem uses the output from the perception block for behavior planning and for both short- and long-range path planning. The control module ensures that the vehicle follows the path provided by the planning subsystem and sends control commands to the vehicle.

Figure 1. Autonomous vehicle system block diagram.
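As an illustration of this modular decomposition, a minimal Python sketch is given below. All function names, units, and thresholds are hypothetical and purely illustrative; they are not part of the system described in the paper:

```python
# Minimal sketch of the classical modular pipeline:
# sensors -> perception -> planning -> control.
# All names, units, and thresholds are hypothetical, for illustration only.

def perception(sensor_readings):
    """Combine raw sensor data into meaningful information (here: nearby obstacles)."""
    return {"obstacles": [r for r in sensor_readings if r < 5.0]}  # ranges under 5 m

def planning(world_state):
    """Behavior planning plus short- and long-range path planning, reduced to one rule."""
    return {"target_speed": 1.0 if world_state["obstacles"] else 10.0}

def control(plan, current_speed):
    """Ensure the vehicle follows the plan: emit a simple proportional speed command."""
    return plan["target_speed"] - current_speed

readings = [2.0, 7.5, 12.1]              # simulated range-sensor readings in meters
world = perception(readings)
plan = planning(world)
command = control(plan, current_speed=3.0)
print(command)  # -> -2.0 (brake: the obstacle at 2.0 m forces a low target speed)
```

Each block can be developed and tuned separately; the end-to-end approach discussed next replaces this whole chain with a single learned function.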
An end-to-end deep neural network we designed for autonomous driving uses camera images as an input, which is a raw signal (i.e., pixels), and steering angle predictions as an output to control the vehicle, Figure 2. End-to-end learning means that the network is trained as a whole, from the beginning to the end, without human interaction or involvement in the training process. The purpose of end-to-end learning is that the system automatically learns internal representations of the necessary processing steps, such as detection of useful road characteristics, based only on the input signal.

Figure 2. Block diagram of an end-to-end autonomous driving system.

Nowadays the machine learning applications have been increasingly deployed to the end devices, mobile phones, and Internet of Things (IoT) solutions. Deployment of machine learning solutions to embedded hardware platforms leads to new developments in machine learning in two directions: development of novel hardware platforms able to process data needed for machine learning inference, and development of novel light machine learning architectures and models suitable for low-performing hardware.

The known solutions for end-to-end learning for autonomous driving [1–5] are designed mostly for the real vehicles, where the machine learning model used for inference is deployed on the
high-performance computer, which is usually located in the trunk of the vehicle, or those solutions use very deep neural networks that are computationally expensive (e.g., using the ResNet50 architecture in [3]). However, our idea was to develop a significantly smaller solution, a light deep neural network, with similar performance during autonomous driving as known solutions, but with a smaller computational cost that will enable deployment on an embedded platform. This lighter solution will be used for robot-cars, embedded automotive platforms able to carry the goods or perform some similar tasks among relatively known trajectories.
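The contrast with the modular pipeline can be made concrete with a minimal sketch of end-to-end inference: raw pixels in, steering angle out, through a single learned function. The toy model and its weights below are made up for illustration; the networks discussed in this paper learn far larger parameter sets from driving data:

```python
# Toy end-to-end inference: one function maps a raw image directly to a
# steering command. Weights are invented for illustration, not trained.
import math

def predict_steering(image, weights, bias):
    """One learned mapping from raw pixels to a steering angle in [-1, 1]."""
    flat = [p for row in image for p in row]           # flatten the H x W grayscale frame
    z = sum(w * p for w, p in zip(weights, flat)) + bias
    return math.tanh(z)                                # squash to a normalized angle

image = [[0.0, 0.2],
         [0.8, 1.0]]                                   # tiny 2x2 stand-in for a camera frame
weights = [0.5, -0.5, 0.25, -0.25]                     # hypothetical "trained" weights
angle = predict_steering(image, weights, bias=0.0)
print(round(angle, 3))  # -> -0.149 (a slight steer to one side)
```

No hand-written perception or planning logic appears anywhere; everything between pixels and steering is absorbed into the learned parameters.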

In this paper, we present a novel deep neural network developed for the purpose of end-to-end learning for autonomous driving, called J-Net, which is designed for embedded automotive platforms. In addition, for the purpose of an objective evaluation of J-Net, we discuss the results of our re-implementations of PilotNet [1,2] and AlexNet [6]. AlexNet was originally developed for object classification, but after our modifications, the new AlexNet-like model is suitable for autonomous driving. Firstly, the novel deep neural network architecture J-Net is developed, and the model is trained and used for inference during autonomous driving. Secondly, we compare the architectures of the three networks, J-Net, PilotNet, and AlexNet, and discuss their computational complexity. This is done in order to have an objective assessment of our network architecture. Next, the implemented models have been trained with the same dataset that we collected. The data collection and inference are done in the self-driving car simulator designed in the Unity environment by Udacity, Inc. [7]. Finally, the trained models have been used for real-time autonomous driving in the simulator environment. The results of autonomous driving using each of these three deep neural network models have been presented and compared. Results of the autonomous driving are given as video recordings of autonomous driving on a representative track in the simulator environment, in addition to the qualitative and quantitative performance evaluation of autonomous driving parameters during inference.

The results show that while the complexity and size of the novel network are smaller in comparison with other models, J-Net maintained the performance, achieving similar efficiency in autonomous driving.
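Model-size comparisons such as the one above follow directly from parameter counting. The sketch below shows the standard count formulas for convolutional and fully connected layers; the layer shapes used are hypothetical and are not the actual J-Net, PilotNet, or AlexNet configurations:

```python
# Standard parameter-count formulas behind trained-model-size comparisons.
# The layer shapes below are illustrative only, not the actual J-Net,
# PilotNet, or AlexNet configurations.

def conv_params(kh, kw, c_in, c_out):
    """Conv layer: (kernel area x input channels + 1 bias) per output channel."""
    return (kh * kw * c_in + 1) * c_out

def dense_params(n_in, n_out):
    """Fully connected layer: one weight per input plus a bias, per output unit."""
    return (n_in + 1) * n_out

# A small network vs. a large one: dense layers dominate the trained model size.
small = conv_params(5, 5, 3, 16) + dense_params(16 * 10 * 10, 10)      # 1216 + 16010
large = conv_params(11, 11, 3, 96) + dense_params(96 * 10 * 10, 4096)  # 34944 + 39325696
print(small, large, round(large / small))
```

Because model file size scales with the parameter count, shrinking kernel counts and dense-layer widths is what makes a network like J-Net deployable on low-performance embedded hardware.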
Aiming for implementation on an embedded automotive platform amplifies the importance of a computationally light solution for the DNN used for autonomous driving, since embedded systems may suffer from hardware limitations, with onboard computers that are not capable of running state-of-the-art deep learning models.

In the next section, we present the related work. The autonomous driving system used for data collection and autonomous driving is described in Section 3. The dataset and data collection strategy are given in Section 4. The proposed approach, architecture, and implementation details of J-Net are given in Section 5. The implementation of the models, a comparison of network architectures and parameters, and the training strategy for all three solutions are presented in Section 6. Results and discussion of the implementation of all three neural networks and inference during autonomous driving are given in Section 7. The conclusion is given in the last section.

2. Related Work

Deep learning is a machine learning paradigm, a part of a broader family of machine learning methods based on learning data representations [8–11]. Representations in one layer of a deep neural network are expressed in terms of other, simpler representations from the previous layers of the deep neural network. A core ingredient of deep neural networks is the convolutional network [12]. Convolutional neural networks (CNN) are a specialized kind of neural network for processing data that has a known grid-like topology. CNNs combine three architectural ideas: local receptive fields, shared weights, and spatial or temporal sub-sampling, which leads to some degree of shift, scale, and distortion invariance. Convolutional neural networks are designed to process data with multiple arrays (e.g., color image, language, audio spectrogram, and video), and benefit from the properties of such signals: local connections, shared weights, pooling, and the use of many layers.
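The three architectural ideas just named can be seen in a tiny, self-contained sketch; the image and kernel values are illustrative only and are not taken from the paper:

```python
# Minimal sketch of the three CNN ideas: a local receptive field (3x3 kernel),
# shared weights (the same kernel slides over the whole image), and
# sub-sampling (2x2 max pooling). Values are illustrative only.

def conv2d(image, kernel):
    """Valid 2D cross-correlation with a single shared kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: sub-sampling that adds shift tolerance."""
    return [[max(fmap[i + u][j + v] for u in range(size) for v in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

image = [[0, 0, 1, 1]] * 4            # vertical edge between columns 1 and 2
edge_kernel = [[-1, 0, 1]] * 3        # responds to a left-to-right intensity step
fmap = conv2d(image, edge_kernel)     # -> [[3, 3], [3, 3]]: strong edge response
print(max_pool(fmap))                 # -> [[3]]
```

One small shared kernel detects the same feature everywhere in the frame, and pooling then discards exact position, which is precisely the invariance described above.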
For that reason, CNNs are most commonly applied to analyzing visual imagery.

Deep learning for computer vision has significant usage in various commercial and industrial systems and products, such as automotive, security and surveillance, augmented reality, smart home applications, retail automation, healthcare, and the game industry; Figure 3. Convolutional neural networks were some of the first deep models to perform well and some of the first networks to solve important commercial applications. One of the first examples of a practical convolutional neural network application was in the 1990s by a research group at AT&T, Inc. that used a CNN for optical character recognition and reading checks [13]. Later, several optical character recognition

and handwriting recognition solutions were developed based on convolutional neural networks [14], while the newest applications of CNNs for computer vision are endless [15–18].

Figure 3. General deep learning development flow. Training—once the convolutional neural network is developed, it is trained with the appropriate dataset. Inference—the trained model is deployed at the end product and used for inference with real-time input data.

A significant contribution to the development of convolutional neural networks and deep learning architectures is given by the ImageNet Large Scale Visual Recognition Challenge [19]. Over several years, the architectures that won this competition represent the state-of-the-art of neural networks and deep learning, becoming a building block and an inspiration for new solutions. Some of the most known architectures are AlexNet, designed by the SuperVision group from University of Toronto [6]; the VGG-16 model designed by VGG (Visual Geometry Group) from University of Oxford [20]; GoogLeNet designed by researchers from Google, Inc.
[21] that introduced inception modules; Residual Network (ResNet) designed by researchers from Microsoft Research [22], with a depth of even 152 layers; and ReNet designed by researchers from Politecnico di Milano and University of Montreal [23]. Some of the novel breakthroughs in deep learning are automated machine learning [24], deep networks with synthetic data [25], video-to-video synthesis [26], playing the game of Go [27,28], and end-to-end learning [29–31].

The first successful attempts at the development of autonomous vehicles started in the 1950s. The first fully autonomous vehicles were developed in 1984 [32,33] and in 1987 [34]. A significant breakthrough in the field of autonomous vehicles was made during the Defense Advanced Research Projects Agency's (DARPA) challenges, the Grand Challenge events in 2004 and 2005 [35,36] and the Urban Challenge in 2007 [37], where it was demonstrated that machines could independently perform the complex human task of driving.

Although there are prototypes of autonomous vehicles currently being tested on regular streets, some of the challenges of autonomous driving are not completely solved yet.
Current challenges include […–46], end-to-end learning for autonomous driving [1–5,47–49], reinforcement learning for driving [5,50–53], and human machine interaction [54,55]. A systematic comparison of deep learning architectures used for autonomous vehicles is given in [56], and a short overview of sensors and sensor fusion in autonomous vehicles is presented in [57]. […] using only camera images as an input. The aim of our work was
