Autonomous Vehicle Navigation in Rural Environments without Detailed Prior Maps


Autonomous Vehicle Navigation in Rural Environments without Detailed Prior Maps

Teddy Ort¹, Liam Paull¹,², Daniela Rus¹

¹ Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology {teddy,rus}@mit.edu
² Département d'informatique et de recherche opérationnelle, Université de Montréal {paulll}@iro.umontreal.ca

This work was partially supported by the Toyota Research Institute (TRI). However, this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.

Abstract— State-of-the-art autonomous driving systems rely heavily on detailed and highly accurate prior maps. However, outside of small urban areas, it is very challenging to build, store, and transmit detailed maps since the spatial scales are so large. Furthermore, maintaining detailed maps of large rural areas can be impracticable due to the rapid rate at which these environments can change. This is a significant limitation for the widespread applicability of autonomous driving technology, which has the potential for an incredibly positive societal impact. In this paper, we address the problem of autonomous navigation in rural environments through a novel mapless driving framework that combines sparse topological maps for global navigation with a sensor-based perception system for local navigation. First, a local navigation goal within the sensor view of the vehicle is chosen as a waypoint leading towards the global goal. Next, the local perception system generates a feasible trajectory in the vehicle frame to reach the waypoint while abiding by the rules of the road for the segment being traversed. These trajectories are updated to remain in the local frame using the vehicle's odometry and the associated uncertainty, based on the least-squares residual and a recursive filtering approach, which allows the vehicle to navigate road networks reliably, and at high speed, without detailed prior maps. We demonstrate the performance of the system on a full-scale autonomous vehicle navigating in a challenging rural environment and benchmark the system on a large amount of collected data.

I. INTRODUCTION

Autonomous driving has the potential to drastically improve our lives. To date, the vast majority of fielded autonomous vehicles focus on one of two scenarios:
1) Lane following on well-marked structured highways
2) Urban navigation based on extremely precise and manually annotated detailed global maps

However, over a third of the roads in the United States are unpaved [1], and 65% do not possess reliable lane markings. These rural roads are challenging for autonomous driving because they are sparsely connected and cover vast areas. Thus, while the detailed global mapping approach becomes impractical as the maps grow prohibitively large, the lane following approach is also infeasible since lane markings and road curb geometry are frequently unavailable for reliable road lane following.

In this work, we build an efficient framework for autonomous driving on rural roads that combines the crowd-sourced topological map, OpenStreetMap (OSM), with a local perception system for navigating individual road segments.

Fig. 1: Mapless Navigation using Sparse Topological Maps. Top: The crowd-sourced topological map from openstreetmap.org is shown as red segments connecting yellow vertices. The pointcloud obtained from a Velodyne HDL-64 laser scanner shows the area of the global map visible to the vehicle's sensors. Bottom: The full end-to-end mapless autonomous control system has been demonstrated in a rural setting.
These two capabilities (OSM + local perception) combine to enable global navigation over vast areas with a manageable amount of required preloaded data (just the OSM). The key insight which enables this lies in the fact that although GPS data is not precise enough for autonomous driving, it is precise enough to enable topological localization, and consequently can be augmented with local perception to solve the full autonomous navigation problem, since the OSM contains all the rules associated with each road segment. For comparison, consider Levinson and Thrun [2], who use a compression method to shrink the size of their high-resolution 2D maps; they are able to store a 20,000 mile map in 200 GB. In contrast, a typical topological map of the same area would only require 3.5 GB. Given that the US alone contains more than 4,000,000 miles of roadways, the storage size of large maps is clearly a significant challenge.
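As a rough back-of-the-envelope extrapolation from these figures (a sketch only; the per-mile rates below are simple averages derived from the numbers quoted above, not measured values):

    # Extrapolating map storage to the full US road network using the
    # figures quoted above. Per-mile rates are averages; illustrative only.
    METRIC_GB_PER_MILE = 200 / 20_000   # compressed high-resolution maps [2]
    TOPO_GB_PER_MILE = 3.5 / 20_000     # topological (OSM-style) map
    US_ROAD_MILES = 4_000_000

    print(f"Metric map of US roads:      {METRIC_GB_PER_MILE * US_ROAD_MILES / 1000:.0f} TB")
    print(f"Topological map of US roads: {TOPO_GB_PER_MILE * US_ROAD_MILES:.0f} GB")
    # -> roughly 40 TB for metric maps vs. 700 GB for topological maps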

We take a holistic approach in our application of this framework using one possible method of road segmentation that is particularly suitable for rural environments. It is based on robustly tracking road boundaries using a 3D LiDAR sensor that is capable of estimating the road surface edges without any assumptions about road markings and only a very loose prior knowledge about the road geometry (that a road is relatively flat).

Our method is very efficient despite the large rate of data collection from the sensor, since we use the current road boundary estimate as a prior for detection at the next time step. We fuse the individual road boundary detections, together with the vehicle odometry, in a probabilistic framework. We have tested our algorithm on a full-scale autonomous Prius [3] in a rural environment. We have also evaluated our method offline on datasets collected from our test site. The full perception system runs on a standard PC at 5 Hz and can reliably detect the road up to 35 m in advance, meaning that it can enable the car to travel at speeds well exceeding 30 m/s (67 mph), and could be much faster if the method were parallelized and implemented on a GPU.

In summary, we claim the following contributions:
• A framework for autonomous driving that relies on a topological rather than metric prior map;
• An example implementation using LiDAR-based road segmentation and temporal road boundary estimation that is particularly effective in unstructured environments;
• A demonstration on a full-scale autonomous car in an unstructured environment.

The remainder of the paper is structured as follows: In Sec. II, we review the related literature; in Sec. III, we introduce our generalized framework for autonomous driving based on topological maps and local perception; in Sec. IV, we describe in detail our specific application of this framework to enable autonomous driving in rural environments; in Sec. V, we show results from our experiments; and in Sec. VI we discuss our conclusions and future work.

II. RELATED WORK

This work is inspired by previous work in the areas of hybrid metric-topological mapping, as well as local perception and localization for autonomous driving. In this section, we highlight some of the most related works.

A. Hybrid Metric/Topological Mapping

The notion of hybrid metric/topological estimation has been an active field of research in the simultaneous localization and mapping (SLAM) community for two decades or more. For example, seminal works such as [4] and [5] laid the foundation in this field. The fundamental idea is to use local perception for small-scale localization and topological perception for global localization and navigation. In some sense, these works face a larger problem than the one we are attempting to tackle, because they attempt to simultaneously build and localize within both the small-scale and the topological maps in an online fashion. In this work, we exploit the fact that, in the driving context, there already exists a curated and open topological map that is actually much more expressive than the topological representations used in these works (such as the Generalized Voronoi Graph), since the edges in OSM are also labeled with important semantic information.

Many other related works in the SLAM field associate one submap with each edge in the topological graph and focus on solving the relative transformation between the sub-graphs [6], [7].
The challenge here relates to allowing a robot to robustly transition from one submap to another. We embrace a fully relative representation of the local map. In other words, the frame used for small-scale localization is always attached to the robot (and is updated each time the robot moves). Others have embraced this fully relative view [8]–[10] for SLAM by parameterizing the robot trajectory in continuous time.

In this work, our goal is not to generate maps (in a relative or global frame) but rather to generate feasible trajectories within the map. By explicitly connecting the local relative perception problem with the task of navigating through the environment at that instant, we can derive a much more lightweight representation for the robot state.

B. Localization for Autonomous Driving

The existing state-of-the-art localization and perception systems can be categorized based on whether they require a prior metric map [2], [11]–[13], and, if so, by the representation of that map. For autonomous mobility-on-demand systems that are severely restricted in their operational area, maintaining a highly accurate globally referenced metric map may be possible, but this approach does not scale well either temporally or spatially.

An entirely separate approach, more closely related to what we are proposing here, is to perceive the local environment for autonomous vehicle navigation. This type of approach is appealing since it does not require construction, maintenance, and storage of very dense maps. However, these algorithms require a mechanism for accurately detecting the drivable road surface or lanes reliably and then connecting this information to a global topological map for robust navigation. The vast majority of local perception systems utilizing such approaches rely heavily on road markings [14]–[16], the very predictable geometry of the road curbs [17], [18], or both [19]. Vision-based approaches to road segmentation not based on road markings include both model-based [20] and learning-based [21] methods and can achieve good performance in ideal conditions, but fail under poor illumination.

In Sec. IV we propose a LiDAR-based road segmentation algorithm that makes minimal assumptions on the lane markings or road geometry and describe how this applies to our mapless navigation framework for rural autonomous driving.

III. MAPLESS AUTONOMOUS DRIVING FRAMEWORK

A. Framework Overview

In this section, we describe our general "mapless" autonomous driving framework. By mapless we refer to the fact that, unlike most autonomous driving systems (e.g. [2], [11]–[13], [22]–[24]), we assume no precise metric high-resolution global map. However, we do assume that we have access to a "topological" map of the environment in which the vehicle is operating (such as the OSM shown in Fig. 1). This topological graph is represented as G = {E, V}, where each vertex v_i ∈ V contains a (possibly imprecise) GPS location v_i = (lat_i, lon_i). Each edge e_ij ∈ E indicates that there is a direct feasible path from vertex v_i to vertex v_j that does not pass through any other vertex. These edges may also be weighted according to the traversal cost. We define the navigation task as finding a feasible trajectory from the current location x_start to the goal location x_end, also in GPS coordinates.

We assume that each edge e_ij contains a unique set of rules of the road that should be adhered to in order to ensure safe traversal. For a detailed treatment of how these rules can be encoded, please see [25].

We decompose this problem into the following sub-tasks: 1) global topological localization, 2) graph search, and 3) edge traversal. The topological localization and graph search problems are relatively well formulated and can be solved with commercial GPS/navigation systems. Therefore, we only give a brief overview of their functionality here.

The topological localization problem is that of adding the current location x_start to the topological graph as a vertex, v_start, and connecting it to the graph by finding at least one feasible path to an existing vertex. The same is done for the end location x_end and vertex v_end. The graph search procedure produces {v_start, ..., v_end} ⊆ V, a sequence of connected vertices that form the shortest path. This problem can be solved with standard graph search algorithms, for example Dijkstra's algorithm.
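To make the graph-search sub-task concrete, the following sketch runs Dijkstra's algorithm over a toy topological graph. The vertex names, GPS fixes, and edge weights are invented for illustration; OSM parsing and topological localization are omitted:

    import heapq

    # Toy topological graph: vertices carry (possibly imprecise) GPS fixes,
    # and weighted edges encode traversable road segments. Values invented.
    vertices = {"v1": (42.3601, -71.0942), "v2": (42.3625, -71.0910), "v3": (42.3660, -71.0870)}
    edges = {"v1": [("v2", 1.0), ("v3", 3.5)], "v2": [("v3", 1.2)], "v3": []}

    def dijkstra(start, goal):
        """Return the minimum-cost vertex sequence from start to goal."""
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            cost, v, path = heapq.heappop(frontier)
            if v == goal:
                return cost, path
            if v in visited:
                continue
            visited.add(v)
            for neighbor, weight in edges[v]:
                if neighbor not in visited:
                    heapq.heappush(frontier, (cost + weight, neighbor, path + [neighbor]))
        return float("inf"), []

    print(dijkstra("v1", "v3"))  # -> (2.2, ['v1', 'v2', 'v3'])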
B. Edge Traversal

This work focuses primarily on the edge traversal sub-task of the full navigation problem. At this stage, we assume we have obtained the next edge that must be traversed, e_ij, along with any associated rules of the road, from the result of the previous topological localization and graph search sub-tasks. The goal of edge traversal is to enable the robot to traverse the edge using only local sensor information. An overview of the procedure is given in Algorithm 1.

First, define the local coordinate frame of the robot at time t to be X_t, the set of valid configurations that adhere to the rules of the road on edge e_ij to be X_ij, and the sensor swath at time t to be S_t. Furthermore, for brevity of notation, we will assume in this section that X_ij is 2D, that is, that the road segment is approximately flat. However, this approach could easily be extended to 3D if the elevation change in the edge traversal were significant.

Next, we find the local goal as the point in X_t that is as close as possible (in the Euclidean sense) to v_j, the next waypoint in the list of graph vertices:

\[
(x_{goal}, y_{goal}) = \operatorname*{arg\,min}_{(x_t, y_t)\,\in\, S_t \cap X_t \cap X_{ij}} \left\| v_j^{X_t} - [x_t, y_t]^T \right\|^2 \tag{1}
\]

where v_j^{X_t} is the Cartesian location of vertex v_j in the X_t frame.

Algorithm 1 Edge Traversal Algorithm: Local Road Perception, Tracking and Following
Inputs: next global waypoint; stream of local road detection measurements; stream of odometry measurements
1: while current state not equal to global waypoint do
2:     while new sensor input not ready do
3:         Update reference trajectory estimate f_{k+1}(α, Θ̄_{k+1}), where Θ̄_{t+1} = T⁻¹ Θ_t
4:         Generate control commands with trajectory following controller
5:     end while
6:     Generate a new reference trajectory based on the current sensor input f_{k+1}(α, Θ̂_{k+1})
7:     Probabilistically fuse reference trajectory estimates
8: end while

This formulation accounts for two important possibilities that result from the fact that we are planning in the local frame: 1) the GPS coordinates in the OSM map may be sufficiently inaccurate that navigating precisely to those coordinates would be dangerous, and 2) the next vertex to be attained in the graph may be sufficiently far away that it is outside of the sensor swath of the vehicle. In such cases, it is impossible to generate a feasible reference path from the current vehicle location all the way to the next waypoint. We solve both of these problems by projecting the goal waypoint onto the feasible space in the sensor swath.

The reference trajectory therefore satisfies:

\[
\tau(\alpha) : [0, 1] \mapsto X_{ij} \tag{2}
\]

with τ(0) = [0, 0]^T and τ(1) = [x_goal, y_goal]^T.

Often this trajectory will be internally parameterized by arc length or curvature; however, we will assume that there is some mapping M to Cartesian coordinates:

\[
\tau(\alpha) \overset{M}{\mapsto} f(\alpha, \Theta_t) = \begin{bmatrix} f_x(\alpha, \Theta_t) \\ f_y(\alpha, \Theta_t) \end{bmatrix} \tag{3}
\]

where f(α, Θ) defines the model that is being used to represent the trajectory and Θ are the model parameters that are recursively estimated. Furthermore, since the reference trajectory is generated using noisy sensor data, we also obtain σ_τ(α), which contains the associated uncertainty for each point along α ∈ [0, 1].
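To make (1) concrete, here is a minimal sketch that projects the next waypoint onto a discretized sensor swath. The grid resolution, swath extent, and the simple lane-width feasibility mask standing in for X_ij are all assumptions of the sketch:

    import numpy as np

    def project_goal(waypoint_xy, swath_points, feasible_mask):
        """Pick the feasible swath point closest (Euclidean) to the waypoint, per (1).

        waypoint_xy:   (2,) next graph vertex v_j expressed in the vehicle frame X_t
        swath_points:  (N, 2) candidate points inside the sensor swath S_t
        feasible_mask: (N,) boolean, True where the point also lies in X_ij
        """
        candidates = swath_points[feasible_mask]
        dists = np.linalg.norm(candidates - waypoint_xy, axis=1)
        return candidates[np.argmin(dists)]

    # Waypoint 60 m ahead but outside the ~35 m swath: the local goal is
    # clipped to the nearest feasible point the sensors can actually see.
    xs, ys = np.meshgrid(np.linspace(0, 35, 36), np.linspace(-5, 5, 11))
    swath = np.column_stack([xs.ravel(), ys.ravel()])
    on_road = np.abs(swath[:, 1]) < 3.0   # toy "rules of the road" region X_ij
    print(project_goal(np.array([60.0, 2.0]), swath, on_road))  # -> [35.  2.]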

1) When a new odometry measurement arrives: For modeling the motion of the vehicle we use a zero-slip kinematic odometry motion model and assume that we have access to a noisy measurement of the linear forward change in position, Δ_x + ε_x, and a noisy measurement of the change in heading, Δ_φ + ε_φ. The kinematic model is then given by:

\[
\begin{aligned}
x^r_{t+1} &= x^r_t + (\Delta_x + \epsilon_x)\cos\phi^r_t \\
y^r_{t+1} &= y^r_t + (\Delta_x + \epsilon_x)\sin\phi^r_t \\
\phi^r_{t+1} &= \phi^r_t + (\Delta_\phi + \epsilon_\phi)
\end{aligned} \tag{4}
\]

where X^r = [x^r_t, y^r_t, φ^r_t]^T is the pose of the vehicle at time t. However, in contrast to the standard approach, where this motion model is used to estimate the position of the robot in some global reference frame, in this case we are re-aligning the "global frame" with the local frame at every timestep. In other words, x^r_t = y^r_t = φ^r_t = 0, and the motion model can be significantly simplified:

\[
\begin{aligned}
x^r_{t+1} &= \Delta_x + \epsilon_x \\
y^r_{t+1} &= 0 \\
\phi^r_{t+1} &= \Delta_\phi + \epsilon_\phi
\end{aligned} \tag{5}
\]

We update the local reference frame to coincide with the vehicle frame. Define the measurement as (Δ_x, Δ_φ), with associated uncertainty σ. This is achieved through a standard homogeneous transformation matrix, [x̄_{t+1}, ȳ_{t+1}, 1]^T = T⁻¹ [x_t, y_t, 1]^T, where

\[
T = \begin{bmatrix} \cos\Delta_\phi & -\sin\Delta_\phi & \Delta_x \\ \sin\Delta_\phi & \cos\Delta_\phi & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{6}
\]

Then, if we restrict the mapping M to a linear function of the model parameters of the form f(α, Θ) = Θ A(α), the model parameters in the new coordinate frame are given by:

\[
x_{t+1} = T^{-1} x_t = T^{-1} f(\alpha, \Theta_t) = T^{-1} \Theta_t A(\alpha) = \bar{\Theta}_{t+1} A(\alpha) \tag{7}
\]

where Θ̄_{t+1} = T⁻¹ Θ_t, and the modifications induced by the change of reference frame are propagated to the parameters in order to maintain the fixed model form of f.

In practice, the frequency of this odometry loop can be much faster than the measurement loop described in the next section, as shown in Fig. 2.

2) When a new sensor measurement arrives: We assume that the vehicle is equipped with a sensor or suite of sensors that can perceive the local environmental geometry and potentially even semantics. These could be vision, LiDAR, or radar sensors, or any combination thereof.

Each new batch of sensor data (e.g. an image from a camera or a scan from a laser) is fed through a perception system to generate "feature" detections in the sensor frame (i.e. no registration is necessary). At this stage, the local motion planning algorithm is run to generate a suitable reference path to x_goal as described above. Since the perception system is generally imperfect, the uncertainty generated by perception can be propagated onto the trajectory parameters. Consequently, we arrive at a new local reference trajectory, f_{k+1}(α, Θ̂_{k+1}), where once again the predefined model has been fit and the trajectory uncertainty has been propagated to the parameters. Finally, since the trajectory from odometry propagation f_{k+1}(α, Θ̄_{k+1}) and the new trajectory generated from the sensor measurement f_{k+1}(α, Θ̂_{k+1}) are in the same local reference frame, they can be probabilistically fused.

IV. APPLICATION: EDGE TRAVERSAL ON RURAL ROAD SEGMENTS

An instance of the above framework was used to navigate autonomously on rural road segments as outlined in Fig. 2.

Fig. 2: An overview of the mapless driving algorithm showing the three levels of hierarchy: Level 1 incorporates coarse GPS measurements for global navigation at 1 Hz. Level 2 uses sensor measurements to generate a trajectory towards a local waypoint at 10 Hz. Level 3 performs low-level control of the vehicle speed and steering at 100 Hz.

In this application, OSM is used for mapping and the approximate location on the OSM is obtained using GPS. The sensor used for planning safe trajectories in the local frame is a Velodyne HDL-64 laser scanner. Each revolution of the sensor generates a pointcloud. A road segmentation algorithm was developed to obtain the road boundary points in the sensor swath of the vehicle. These points were then fit using a RANSAC/least-squares approach to obtain an optimal trajectory within the road boundary points. Importantly, the quality of the fit was also obtained from the residuals output from the least-squares minimization. This allows the trajectory estimates to be probabilistically fused during the next iteration.
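A minimal sketch of this propagate-then-fuse cycle on the spline parameters, assuming the two estimates are independent Gaussians combined by inverse-variance weighting; the paper's exact filter gains are not specified here, so the numbers and this fusion rule are illustrative stand-ins:

    import numpy as np

    def fuse(theta_pred, P_pred, theta_meas, P_meas):
        """Fuse the odometry-propagated parameters with a new measurement.

        Information-form fusion of two Gaussian estimates of the same
        parameter vector, both already expressed in the vehicle frame.
        """
        info = np.linalg.inv(P_pred) + np.linalg.inv(P_meas)
        P = np.linalg.inv(info)
        theta = P @ (np.linalg.inv(P_pred) @ theta_pred
                     + np.linalg.inv(P_meas) @ theta_meas)
        return theta, P

    # Propagated estimate (grown uncertainty) and a fresh, residual-scaled
    # measurement; both parameter vectors are [y0, phi0, C0, C1].
    theta_bar = np.array([0.30, 0.020, 1e-3, 1e-5])
    P_bar = np.diag([0.2, 0.01, 1e-4, 1e-6])
    theta_hat = np.array([0.25, 0.015, 8e-4, 9e-6])
    P_hat = np.diag([0.1, 0.02, 2e-4, 1e-6])
    theta_fused, P_fused = fuse(theta_bar, P_bar, theta_hat, P_hat)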
The vehicle is also equipped with odometry sensors used to propagate the previous trajectory estimates to the local vehicle frame after the vehicle has moved. Thus, the reference trajectory is always in the vehicle local frame. Since the odometry is typically updated much faster than the sensor measurements arrive (in this case 100 Hz), the vehicle can continue to follow the reference trajectory for some time even while no new measurements have been received. Finally, the reference trajectory is used to steer the vehicle towards the local goal for edge traversal, (x_goal, y_goal), which is within the sensor swath. As the sensor reveals new areas of the map, the local goal will move closer to the global goal v_end as described in Sec. III.

A. Preliminaries

As is common in autonomous vehicle navigation, we represent the reference trajectory as a clothoid [26]. Specifically, we choose to model the road as a clothoid of degree one, whereby the curvature of the road varies linearly. We can recover the heading direction along the path by integrating the curvature along the path:

\[
\phi(l) = \phi_0 + \int_0^l C(t)\,dt = \phi_0 + C_0 l + \frac{C_1}{2} l^2 \tag{8}
\]

Fig. 3: Clothoid geometry.

Referring to Fig. 3, based on differential geometry we can reparameterize the clothoid in Cartesian coordinates [26]:

\[
x = x_0 + \int_0^l \cos\phi(t)\,dt, \qquad y = y_0 + \int_0^l \sin\phi(t)\,dt \tag{9}
\]

Collectively, (8) and (9) combine to define the mapping M as defined in (3). By assuming that (a) the origin of the curve in Cartesian coordinates is in line with the vehicle (x_0 = 0) and (b) the change in heading over the course of the curve is relatively small (cos φ(t) ≈ 1, sin φ(t) ≈ φ(t))¹, we can arrive at the following spline representation in Cartesian coordinates:

\[
y = y_0 + \phi_0 x + \frac{C_0}{2} x^2 + \frac{C_1}{6} x^3 \tag{10}
\]

which corresponds to selecting f(α, Θ) according to

\[
f(\alpha, \Theta) = \begin{bmatrix} 0 & x_{goal} & 0 & 0 \\ y_0 & \phi_0 x_{goal} & \frac{C_0}{2} x_{goal}^2 & \frac{C_1}{6} x_{goal}^3 \end{bmatrix} \begin{bmatrix} 1 \\ \alpha \\ \alpha^2 \\ \alpha^3 \end{bmatrix} \tag{11}
\]

as described in Sec. III-B.1. The four-dimensional state that we will use to estimate the allowable configurations S_t ∩ X_ij within the sensor swath will then be X^s_t = [y_0, φ_0, C_0, C_1]^T. Note that this region is the maximum drivable region in the current sensor swath, and rules-of-the-road constraints in X_ij could be used to further restrict the allowable trajectories to a particular portion of the road. In this application, since the roads are in a test facility without oncoming traffic, the trajectory was chosen to follow the center of the road.

¹ As is noted in [26], this second assumption can be explicitly enforced by truncating the curve at the point where this approximation error grows too large; typically φ(t) ≤ 15° is used as a threshold.

B. Trajectory Propagation from Odometry

When new vehicle odometry arrives (at high frequency), the spline estimate is updated based on the information from the onboard sensors of the vehicle. The process model is given by

\[
X^s_{t+1} = f(X^s_t, X^r_{t+1}, \epsilon_t) \tag{12}
\]

where ε_t = [ε_x, ε_φ, ε_{C0}, ε_{C1}]^T are all the additive zero-mean Gaussian sources of noise in the process model, and f is derived by substituting the updated vehicle estimate from odometry (5) into the spline function (10). Differently from [20], we also incorporate the change in heading resulting from the odometry, which results in an additional offset added to the φ_0 parameter:

\[
y = y_0 + (\phi_0 + \Delta_\phi + \epsilon_\phi)(x + \Delta_x + \epsilon_x) + \frac{C_0}{2}(x + \Delta_x + \epsilon_x)^2 + \frac{C_1}{6}(x + \Delta_x + \epsilon_x)^3 \tag{13}
\]

which is solved and then the coefficients are collected relative to the appropriate terms to yield the noiseless process model:

\[
X^s_{t+1} = \begin{bmatrix} 1 & \Delta_x & \frac{1}{2}\Delta_x^2 & \frac{1}{6}\Delta_x^3 \\ 0 & 1 & \Delta_x & \frac{1}{2}\Delta_x^2 \\ 0 & 0 & 1 & \Delta_x \\ 0 & 0 & 0 & 1 \end{bmatrix} X^s_t + \begin{bmatrix} \Delta_\phi \Delta_x \\ \Delta_\phi \\ 0 \\ 0 \end{bmatrix} \tag{14}
\]

Note that this process model is linear with respect to the state but not with respect to the noise parameters. Therefore, the noise Jacobians W = ∂f/∂ε_t are calculated at each time step in the prediction phase:

\[
W = \begin{bmatrix} \phi_0^t + \Delta_x C_0^t + \frac{1}{2}\Delta_x^2 C_1^t + \Delta_\phi & \Delta_x & 0 & 0 \\ C_0^t + \Delta_x C_1^t & 1 & 0 & 0 \\ C_1^t & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{15}
\]

and the final additive noise term in the filter update is given by W Q W^T, where Q is a diagonal matrix of noise covariances for each of the sources of noise.
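A small sketch of the noiseless process model (14), applying one odometry step to the spline parameters; the numeric values are illustrative:

    import numpy as np

    def propagate_spline(X_s, dx, dphi):
        """Noiseless process model (14): shift the clothoid spline parameters
        [y0, phi0, C0, C1] into the vehicle frame after moving (dx, dphi)."""
        F = np.array([
            [1.0, dx, 0.5 * dx**2, dx**3 / 6.0],
            [0.0, 1.0, dx, 0.5 * dx**2],
            [0.0, 0.0, 1.0, dx],
            [0.0, 0.0, 0.0, 1.0],
        ])
        u = np.array([dphi * dx, dphi, 0.0, 0.0])
        return F @ X_s + u

    X_s = np.array([0.4, 0.05, 1e-3, 1e-5])  # y0 [m], phi0 [rad], C0 [1/m], C1 [1/m^2]
    print(propagate_spline(X_s, dx=0.2, dphi=0.002))  # one 100 Hz odometry step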
C. Single Scan Road Boundary Detection

In this section, we describe our method for robustly detecting road regions in rural areas from LiDAR data. The key insight in this method is that we can detect the road area in each individual laser scan by thresholding in the frequency domain. The road has a much smoother texture relative to surrounding areas: the points in each ring exhibit less variation in their position compared to the points lying in the grass or trees on the side, as shown in Fig. 5. To characterize the texture of a region, the distance from the vehicle at each point is calculated, where for the i-th point x_i the distance is d_i = ‖x_i‖. Next, the distance signal is processed by taking the absolute value of the first derivative of the signal in order to remove the mean and amplify the noise. The resulting signal is a measure of the surface texture, where larger values indicate rougher surfaces. The top graph in Fig. 4 shows the raw signal representing a single ring in a laser scan, while the processed signal is shown in the bottom. The vertical red lines show the location of the actual road edges for the road in front of the vehicle.

Fig. 4: A single ring from the pointcloud viewed as a one-dimensional signal; the road texture is clearly distinguishable from the surroundings. The top graph shows the raw signal and the bottom shows the signal after post-processing (see IV-C). The vertical red lines indicate the location marked as the road boundary in front of the vehicle.

Finally, the processed signal is searched for the road edges by first finding the standard deviation of the signal in the road area and then returning the first point that is larger than n standard deviations, where n is a tunable parameter used to choose the sensitivity of the edge detection. In practice, this value could remain constant over a variety of roads and weather conditions.

Once the edges in a single ring are found, the algorithm moves on to the next ring. In each case, the initial guess for the location of the edge is chosen as the point that has the shortest Euclidean distance to the one in the prior ring. In this way, the edge detection moves progressively along the edges of the road without the need to explore the regions of the point cloud that are far from the road. Fig. 5 shows the result of running the edgepoint detection algorithm on the pointclouds.
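The following sketch implements the single-ring search described above, assuming the ring's points arrive ordered by azimuth and that an index known to lie on the road (e.g. directly ahead of the vehicle) is available; the 20-point window used for the road-area statistics is likewise an assumption of the sketch:

    import numpy as np

    def detect_ring_edges(ring_xyz, road_idx, n_sigma=3.0):
        """Texture-based road-edge search on one LiDAR ring (see Sec. IV-C).

        ring_xyz: (N, 3) points of a single ring, ordered by azimuth
        road_idx: index of a point assumed to lie on the road surface
        Returns the two boundary indices (up-azimuth, down-azimuth).
        """
        d = np.linalg.norm(ring_xyz, axis=1)      # range signal d_i = ||x_i||
        texture = np.abs(np.diff(d))              # |first derivative|: rough = large
        window = texture[max(road_idx - 20, 0):road_idx + 20]
        sigma = window.std()                      # road-area standard deviation

        def first_exceedance(indices):
            for i in indices:                     # walk outward from the road seed
                if texture[i] > n_sigma * sigma:
                    return i
            return indices[-1]                    # no edge found: end of the ring

        up = first_exceedance(range(road_idx, len(texture)))
        down = first_exceedance(range(road_idx, -1, -1))
        return up, down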

D. Trajectory Update from New Local Estimate

We use RANSAC to fit the boundary points to a spline (one for each road boundary). The reference spline trajectory is then generated as the average between the two boundary splines, as shown in blue in Fig. 5. The result of each single-scan curve fit is treated as a new incoming measurement. After performing RANSAC, the inliers are fit to the spline using a least-squares fitting procedure, and the residual of this fitting procedure is used to scale the covariance of the measurement update:

\[
z_t = H X^s_t + \eta_t \tag{16}
\]

where H = [1, x, ½x², ⅙x³] and each data point used to fit the curve can be treated as an independent measurement. For point i:

\[
\eta^i_t = \left\| z^i_t - H^i X^s_t \right\|^2 \tag{17}
\]

Thus, the detected road is used to generate a feasible driving trajectory along with a measure of the certainty that the trajectory is correct.

Fig. 5: Top: We use a texture-based approach to detect the road boundaries (blue spheres) from single laser scans. Bottom: The result from a RANSAC implementation used to fit a spline to the edge points detected at the road boundaries. Note that the empty area in the center is the location of the vehicle, which occludes the sensor.

E. Trajectory Following Controller

The final step to enable autonomous navigation is the trajectory following controller. This lies between the output of the road boundary segmentation and the low-level vehicle steering controller, as seen in Fig. 2. The trajectory follower takes as input a parameterized trajectory spline along the road and determines the steering angle the vehicle should take to follow that trajectory. Since the trajectory generation is happening in a frame always coincident with the vehicle, we use a moving goal-point approach where the vehicle is always following a point at a fixed distance, δ, on the input trajectory. To calculate the position and orientation errors at δ, first the spline model in (10) is evaluated to get the position error y_err = y(δ), then the derivative of that model is used to calculate the angle error:

\[
\frac{dy}{dx} = \phi_0 + C_0 x + \frac{C_1}{2} x^2, \qquad \phi_{err} = \tan^{-1}\left(\left.\frac{dy}{dx}\right|_{x=\delta}\right) \tag{18}
\]

Finally, these errors are passed to the position and orientation PID controllers, which were tuned to drive these errors to 0.
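A sketch of the moving goal-point computation in (10) and (18), with a proportional-only stand-in for the PID controllers described above; the look-ahead distance and gains are invented for illustration:

    import math

    def steering_errors(X_s, delta):
        """Moving goal-point errors at look-ahead delta (eqs. (10) and (18))."""
        y0, phi0, C0, C1 = X_s
        y_err = y0 + phi0 * delta + C0 / 2 * delta**2 + C1 / 6 * delta**3  # (10)
        dy_dx = phi0 + C0 * delta + C1 / 2 * delta**2
        phi_err = math.atan(dy_dx)                                         # (18)
        return y_err, phi_err

    def steering_command(X_s, delta=8.0, k_y=0.3, k_phi=1.0):
        """Combine the two errors into a steering angle (P-only toy gains)."""
        y_err, phi_err = steering_errors(X_s, delta)
        return k_y * y_err + k_phi * phi_err

    print(steering_command([0.4, 0.05, 1e-3, 1e-5]))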

V. RESULTS

A. Experimental Setup

The vehicle used for testing and data collection was a 2015 Toyota Prius V retrofitted for autonomous driving (see [3] for details). A Velodyne HDL-64S3 LiDAR sensor was used for perception, while a Microstrain 3DM-GX4 inertial measurement unit (IMU) and external wheel encoders provided onboard odometry measurements. Since the road segmentation is able to perceive the road up to 35 m ahead, and the vehicle reached maximum speeds of 10 m/s, at 5 Hz the frequency of the road segmentation was fast enough to ensure that the vehicle traveled at most 2 m, or about 5%, along the known road spline before it was ready to incorporate new sensor data.

B. Localization Accuracy

In order to evaluate the accuracy of the local trajectories generated by the local perception system, the parameters X̂^s_t and the uncertainty P_t at each time step t were recorded. Furthermore, the raw parameters estimated directly from the model-fitting step without temporal filtering were also saved. Finally, the vehicle was manually driven down the center of the road and the path taken was recorded using the odometry measurements. These measurements served as ground truth, allowing us to compare the path predicted by the segmentation process with the path the vehicle actually took. The root mean squared distance (RMSD) was evaluated for each point along the vehicle path, as compared to the true road center, to determine how well the predicted path matched. These measurements allowed us to compare both the unfiltered and filtered road parameters as estimated by the segmentation system for evaluation. In the top of Fig. 6, we show these road spline estimates overlaid on a map of the test site, and the corresponding deviation from the road center is shown in the bottom. Notice that while the unfiltered parameters occasionally show spikes in the deviation, the filtered estimate shown in red tends to attenuate those spikes due to the robustness gained by fusing the new measurements with the prior estimate.

To better quantify the overall quality of the results and the effect of the filter, refer to Fig. 7. The histogram shows that for both methods more than 80% of the pre


The Audit and Accounting Thresholds . AAT is a registered charity. No. 1050724. 3. Accounting Threshold The . regulations apply in respect of financial years beginning on or after 1 January 2016 whereby the audit threshold and the accounting threshold have become the same for private limited companies. The requirements for a private limited company that is also a charity are different. Please .