A Navigation System For Robots Operating In Crowded Urban Environments

Rainer Kümmerle, Michael Ruhnke, Bastian Steder, Cyrill Stachniss, Wolfram Burgard

Abstract— Over the past years, there has been tremendous progress in the area of robot navigation. Most of the systems developed thus far, however, are restricted to indoor scenarios, non-urban outdoor environments, or road usage with cars. Urban areas introduce numerous challenges to autonomous mobile robots as they are highly complex and, in addition, dynamic. In this paper, we present a navigation system for pedestrian-like autonomous navigation with mobile robots in city environments. We describe different components including a SLAM system for dealing with huge maps of city centers, a planning approach for inferring feasible paths that also takes into account the traversability and type of terrain, and a method for accurate localization in dynamic environments. The navigation system has been implemented and tested in several large-scale field tests in which the robot Obelix managed to autonomously navigate from our university campus over a 3.3 km long route to the city center of Freiburg.

I. INTRODUCTION

Navigation is a central capability of mobile robots, and substantial progress has been made in the area of autonomous navigation over the past years. The majority of navigation systems developed thus far, however, focuses on navigation in indoor environments, through rough outdoor terrain, or based on road usage. Only few systems have been designed for robot navigation in populated urban environments such as pedestrian zones, for example, the autonomous city explorer [1]. Robots that are able to successfully navigate in urban environments and pedestrian zones have to cope with a series of challenges including complex three-dimensional settings and highly dynamic scenes paired with unreliable GPS information.

In this paper, we describe a navigation system that enables mobile robots to autonomously navigate through city-center scenes. Our system builds upon and extends existing technology for autonomous navigation. In particular, it contains a SLAM system for learning accurate maps of urban city areas, a dedicated map data structure for dealing with large-scale maps, a variant of Monte Carlo localization that utilizes this data structure, and a dedicated approach for terrain analysis that deals with vegetation, dynamic objects, and negative obstacles. We furthermore describe how these individual components are integrated. Additionally, we present the result of a large-scale experiment during which the robot Obelix traveled autonomously from our university campus to the city center of Freiburg during a busy day in August 2012. During that trial, the robot had to master a distance of over 3 km. The trajectory taken by the robot and two pictures taken during its run are depicted in Fig. 1.

This work has been supported by the EC under FP7-231888-EUROPA. All authors are with the Dept. of Comp. Science, University of Freiburg.

Fig. 1. Example trajectory traveled by our robot navigating in an urban environment that also includes a pedestrian zone with a large number of people. Map data on the left from OpenStreetMap (© OpenStreetMap contributors).

Thus, the aim of this paper is not only to describe the relevant components but also to highlight the capabilities that can be achieved with such a system. We try to motivate our design decisions and discuss critical aspects as well as limitations of the current setup.
II. RELATED WORK

The problem of autonomous navigation in populated areas has been studied intensively in the past. Among the pioneering systems were the robots RHINO [2] and Minerva [3], which operated as interactive mobile tour-guides in crowded museums. An extension of this tour-guide concept to interactive multi-robot systems was the RoboX system developed by Siegwart et al. [4] for the Expo'02 Swiss National Exhibition. Gross et al. [5] installed a robot as a shopping assistant that provided wayfinding assistance in home improvement stores. Although these systems were able to robustly navigate in heavily crowded environments, they were restricted to two-dimensional representations of the environment and assumed that the robots operated in a relatively confined planar area.

Relatively few robotic systems have been developed for autonomous navigation in city centers. The concept closest to the one described in this paper is probably that of the Munich City Explorer developed by Bauer et al. [1].

In contrast to our system, which operates completely autonomously and does not require human intervention, the city exploration system relies on interaction with humans to obtain the direction in which to move next. The city explorer only builds local maps and does not autonomously plan its path from its position to the overall goal location. A further related approach has been developed in the context of the URUS project [6], which considered urban navigation but focused more on networking and collaborative actions as well as the integration with surveillance cameras and portable devices.

Also, the problem of autonomous navigation with robotic cars has been studied intensively. For example, there has been the DARPA Grand Challenge, during which autonomous vehicles showed the ability to navigate successfully over large distances through desert areas [7], [8], [9]. During the DARPA Urban Challenge, several car systems were presented that are able to autonomously navigate through dynamic city street networks with complex car traffic scenarios and under consideration of road traffic rules [10], [11]. Recently, commercial self-driving cars [12] have been developed and legalized to perform autonomous navigation. In contrast to these methods, which focus on car navigation, the system described in this paper has been developed to enable mobile robots to perform pedestrian-like autonomous navigation in urban environments with many types of dynamic objects like pedestrians, cyclists, or pets.

A long-term experiment about the robustness of an indoor navigation system was recently presented by Marder-Eppstein et al. [13]. There, accurate and efficient obstacle detection operating on the data obtained by tilting a laser range finder has been realized. In contrast to this system, our approach has a component for tracking moving obstacles to explicitly deal with the dynamic objects in highly populated environments and also includes a terrain analysis component that is able to deal with a larger variety of terrain.

III. THE ROBOT OBELIX USED FOR THE EVALUATION

The robot used to carry out the field experiments is a custom-made platform developed within the EC-funded project EUROPA [14], which is an acronym for the EUropean RObotic Pedestrian Assistant. The robot has a differential drive that allows it to move with a maximum velocity of 1 m/s. Using flexibly mounted caster wheels in the front and the back, the robot is able to climb steps up to a maximum height of approximately 3 cm. Whereas this is sufficient to negotiate a lowered pedestrian sidewalk, it has not been designed to go up and down normal curbs, so the robot needs to avoid such larger steps. The footprint of the robot is 0.9 m × 0.75 m and the robot is around 1.6 m tall.

The main sensor input is provided by laser range finders. Two SICK LMS 151 are mounted horizontally in the front and in the back of the robot. The horizontal field of view of the front laser is restricted to 180°. The remaining beams are reflected by mirrors to observe the ground surface in front of the robot. Additionally, another scanner is tilted 70° downwards to detect obstacles and to identify the terrain the robot drives upon. Fig. 2 visualizes the setup of the non-horizontal laser beams.

Fig. 2. One laser is mounted downwards (left) to sense the surface in front of the robot to decide whether it is safe to navigate over a particular area. A second horizontally mounted laser is combined with mirrors, which reflect a portion of its beams towards the ground (right). The data from those two lasers is used to find obstacles that are not visible in the horizontal scans.
A Hokuyo UTM-30LX mounted on top of the head of the robot is used for mapping and localization, whereas the data of an XSens IMU is integrated to keep the UTM aligned horizontally by controlling a servo accordingly. The robot is furthermore equipped with a Trimble GPS Pathfinder Pro to provide prior information about its position during mapping tasks. While the robot also has stereo cameras onboard, their data is not used for the described navigation tasks; the images are only used for visualization in this paper.

IV. SYSTEM OVERVIEW

In order to autonomously navigate in an environment, our system requires a map of the area. This might seem like a huge drawback, but mapping an environment can be done in a considerably small amount of time. For example, it took us around 3 hours to map a 7.4 km long trajectory by controlling the robot with a joystick. Furthermore, this only has to be done once, as the main structures of an urban area do not change quickly. Small modifications to the environment, like billboards or shelves placed in front of shops, can be handled by our system in a robust manner.

In the following, we describe how we obtain the map of the environment by means of a SLAM algorithm as well as the most important components of the autonomous navigation system, such as the algorithms for localization, path planning, and obstacle detection, which enable our robot to operate in large-scale city centers. The entire navigation system described in this section runs on one standard quad-core i7 laptop operating at 2.3 GHz.

A. Mapping

We apply a graph-based SLAM formulation to estimate the maximum-likelihood (ML) configuration. Let $\mathbf{x} = (\mathbf{x}_1, \ldots, \mathbf{x}_n)^T$ be a vector where each element describes the pose of the robot at a certain time. $\mathbf{z}_{ij}$ and $\Sigma_{ij}$ are respectively the mean and the covariance matrix of an observation describing the motion of the robot between the time indices $i$ and $j$, where we assume Gaussian noise. Let $\mathbf{e}_{ij}(\mathbf{x})$ be an error function which computes the difference between the observation $\mathbf{z}_{ij}$ and the expected value given the current state of the nodes $i$ and $j$. Additionally, let $\mathbf{e}_i(\mathbf{x})$ be an error function which relates the state of node $i$ to its prior $\mathbf{z}_i$ having the covariance matrix $\Sigma_i$.

Assuming the measurements are independent, we obtain the ML configuration of the robot's trajectory as

$$\mathbf{x}^* = \operatorname*{argmin}_{\mathbf{x}} \sum_{\langle i,j \rangle \in \mathcal{C}} \| \mathbf{e}_{ij}(\mathbf{x}) \|_{\Sigma_{ij}} + \sum_{i \in \mathcal{P}} \| \mathbf{e}_i(\mathbf{x}) \|_{\Sigma_i}, \qquad (1)$$

where $\|\mathbf{e}\|_{\Sigma} \overset{\text{def.}}{=} \mathbf{e}^T \Sigma^{-1} \mathbf{e}$ computes the Mahalanobis distance of its argument, and $\mathcal{C}$ and $\mathcal{P}$ are the sets of constraints and priors. We employ our g2o toolkit [15] for solving Eq. (1), which iteratively linearizes and solves the linear approximation until a convergence criterion is met.

The laser-based front-end generating the set of constraints $\mathcal{C}$ is an extension of the approach proposed by Olson [16]. It applies a correlative scan-matcher to estimate the motion of the robot between successive time indices. Furthermore, it obtains loop closures by matching the current scan against all scans which are within the three-sigma uncertainty ellipsoid. It filters false positives by spectral clustering. The GPS sensor provides the set of priors $\mathcal{P}$. As GPS signals may be corrupted by multi-path effects, we apply an outlier rejection method to remove those measurements. Instead of directly solving Eq. (1), we consider a robust cost function – namely the Pseudo Huber cost function [17] – for the prior measurements. After optimization, we remove the 2 % of the prior edges having the largest residual. We repeat this process five times. Thus, we keep approximately 90 % of the original prior information. Using this approach, some good GPS measurements might be rejected. However, we found in our practical experiments that the effect of outliers in the prior measurements may be severe (see Fig. 3).

Fig. 3. Influence of outliers in the set of prior measurements. Left: our method rejects prior measurements having a large error. Middle: the map as it is estimated by taking into account all prior measurements. Right: our method achieves a good estimate of the map by rejecting priors which are likely to be outliers.
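The combination of a robust cost function for the priors with repeated pruning of the worst-fitting prior edges can be summarized in a short sketch. The following Python fragment is only an illustration of that loop, not the g2o-based implementation used on the robot; `optimize` and `prior_residual` are hypothetical callables standing in for the actual graph optimizer and the per-prior error evaluation.

```python
def prune_prior_outliers(graph, prior_edges, optimize, prior_residual,
                         drop_fraction=0.02, rounds=5):
    """Iteratively optimize the pose graph and discard the GPS prior
    edges with the largest residuals (assumed to be outliers, e.g.
    caused by multi-path effects).

    optimize(graph, priors)      -- hypothetical: runs the non-linear
                                    least-squares optimization of Eq. (1)
    prior_residual(graph, prior) -- hypothetical: robustified error of one
                                    prior edge after optimization
    """
    priors = list(prior_edges)
    for _ in range(rounds):
        optimize(graph, priors)                       # solve Eq. (1)
        ranked = sorted(priors,
                        key=lambda p: prior_residual(graph, p),
                        reverse=True)                 # worst priors first
        n_drop = max(1, int(drop_fraction * len(ranked)))
        priors = ranked[n_drop:]                      # keep ~98 % per round
    return priors
```

Dropping 2 % per round over five rounds keeps roughly 0.98^5 ≈ 90 % of the priors, which matches the fraction reported above.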

Including the prior information has several advantages. First, it improves the accuracy of the obtained maps [18]. Second, if the robot extends its map, coordinates are easy to transform between different maps, because the maps share a common global coordinate frame.

B. Map Data Structure

Obtaining a 2D map given the graph-based SLAM solution and the laser data is typically done in a straightforward manner, for example, by computing an occupancy grid. However, storing one monolithic occupancy grid for a large-scale environment may lead to a large memory footprint. For example, a 2 km by 2 km area at a resolution of 0.1 m and 4 bytes per cell requires around 1.5 GB of main memory. Instead of computing one large map, we use the information stored in the graph to render maps locally and close to the robot's position. A similar approach was recently described by Konolige et al. [19].

We generate the local map as follows. We apply Dijkstra's algorithm to compute the distance between the nodes in the graph. This allows us to only consider observations that have been obtained by the robot in the local neighborhood of its current location. We compute the set of nodes to be used to build the local map as

$$\mathcal{V}_{\text{map}} = \{ \mathbf{x}_i \in \mathbf{x} \mid \text{dijkstra}(\mathbf{x}_i, \mathbf{x}_{\text{base}}) \le \delta \}, \qquad (2)$$

where $\mathcal{V}_{\text{map}}$ is the set of nodes whose observations will be used for obtaining the local map, $\mathbf{x}_{\text{base}}$ the node closest to the robot's current position, $\text{dijkstra}(\mathbf{x}_i, \mathbf{x}_{\text{base}})$ the distance between the two nodes according to Dijkstra's algorithm, and $\delta$ the maximal allowed distance for a node to be used in the map rendering process.
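Eq. (2) amounts to a Dijkstra expansion over the pose graph that is cut off at radius $\delta$. The sketch below is a minimal illustration, assuming the graph is available as an adjacency dictionary with metric edge lengths; node identifiers and the graph layout are placeholders.

```python
import heapq

def local_map_nodes(adjacency, x_base, delta):
    """Return the set V_map of pose-graph nodes whose Dijkstra distance
    from x_base is at most delta, as in Eq. (2).

    adjacency: dict mapping node id -> list of (neighbor id, edge length)
    """
    dist = {x_base: 0.0}
    queue = [(0.0, x_base)]
    selected = set()
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        selected.add(node)
        for neighbor, length in adjacency.get(node, []):
            nd = d + length
            if nd <= delta and nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return selected
```

Only the observations attached to the returned nodes are then rendered into the local grid, which keeps the memory footprint independent of the overall map size.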
As hard-disk space is rather cheap and its usage does not affect the performance of other processes, we store each local map on disk after the first access to it by the system. The localization and path-planning algorithms described in the following sections all operate on these local maps. The map is expressed in the local frame of $\mathbf{x}_{\text{base}}$ and we currently use a local map of 40 m × 40 m.

C. Localization

To estimate the pose $\mathbf{x}$ of the robot given a map, we maintain a probability density $p(\mathbf{x}_t \mid \mathbf{z}_{1:t}, \mathbf{u}_{0:t-1})$ of the location $\mathbf{x}_t$ of the robot at time $t$ given all observations $\mathbf{z}_{1:t}$ and all control inputs $\mathbf{u}_{0:t-1}$.

Our implementation employs a sample-based approach which is commonly known as Monte Carlo localization (MCL) [20]. MCL is a variant of particle filtering [21] where each particle corresponds to a possible robot pose and has an assigned weight $w^{[i]}$. In the prediction step, we draw for each sample a new particle according to the prediction model $p(\mathbf{x}_t \mid \mathbf{u}_{t-1}, \mathbf{x}_{t-1})$. Based on the sensor model $p(\mathbf{z}_t \mid \mathbf{x}_t)$, each particle gets assigned a new weight in the correction step. To focus the finite number of particles in the regions of high likelihood, we need to re-sample the particles. A particle is drawn with a probability proportional to its weight. However, re-sampling may drop good particles. To this end, the decision when to re-sample is based on the number of effective particles [22]. Our current implementation uses 1,000 particles.

A crucial question in the context of localization is the design of the observation model that calculates the likelihood $p(\mathbf{z} \mid \mathbf{x})$ of a sensor measurement $\mathbf{z}$ given the robot is located at the position $\mathbf{x}$. We employ the so-called endpoint model or likelihood fields [23]. Let $\mathbf{z}'_k$ be the $k$-th range measurement of $\mathbf{z}$ re-projected into the map given the robot pose $\mathbf{x}$. Assuming that the beams are independent and the noise is Gaussian, the endpoint model evaluates the likelihood $p(\mathbf{z} \mid \mathbf{x})$ as

$$p(\mathbf{z} \mid \mathbf{x}) \propto \prod_k \exp\left( -\frac{\| \mathbf{z}'_k - \mathbf{d}'_k \|^2}{2 \sigma_k^2} \right), \qquad (3)$$

where $\mathbf{d}'_k$ is the point in the map which is closest to $\mathbf{z}'_k$.
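For illustration, the endpoint model of Eq. (3) can be written in a few lines. The sketch assumes the local map provides a precomputed distance lookup (a likelihood field); `nearest_obstacle_distance` and the noise parameter `sigma` are placeholders, not names from the actual implementation.

```python
import math

def endpoint_likelihood(scan_endpoints, pose, nearest_obstacle_distance,
                        sigma=0.2):
    """Evaluate p(z | x) with the endpoint (likelihood-field) model, Eq. (3).

    scan_endpoints: beam endpoints as (range, bearing) in the sensor frame
    pose:           (x, y, theta) hypothesis of one particle
    nearest_obstacle_distance(px, py): hypothetical lookup returning the
        distance from (px, py) to the closest occupied cell of the local map
    sigma: assumed measurement noise (illustrative value)
    """
    x, y, theta = pose
    log_likelihood = 0.0
    for r, bearing in scan_endpoints:
        # re-project the beam endpoint z'_k into the map frame
        px = x + r * math.cos(theta + bearing)
        py = y + r * math.sin(theta + bearing)
        d = nearest_obstacle_distance(px, py)     # ||z'_k - d'_k||
        log_likelihood += -(d * d) / (2.0 * sigma * sigma)
    return math.exp(log_likelihood)
```

In practice one would keep and compare log-likelihoods to avoid numerical underflow when many beams contribute to the product.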

As described above and in contrast to most existing localization approaches, our system does not employ a single grid map to estimate the pose of the robot. Given our graph-based structure, we need to determine a vertex $\mathbf{x}_{\text{base}}$ whose map should be taken into account for evaluating $p(\mathbf{z} \mid \mathbf{x})$. We determine the base node $\mathbf{x}_{\text{base}}$ as the pose-graph vertex that minimizes the distance to $\mathbf{x}$ and furthermore guarantees that the current location of the robot was observed in the map. This visibility constraint is important to maximize the overlap between the map and the current observation. Without this constraint, the closest vertex might be outside a building while the robot is actually inside of it.

D. Traversability Analysis

The correct identification of obstacles is a critical component for autonomous navigation with a robot. Given our robotic platform, we need to identify obstacles having a height just above 3 cm. Such obstacles are commonly described as positive obstacles, as they stick out of the ground surface the robot travels upon. In contrast to that, negative obstacles are dips deeper than the maximum traversable height of 3 cm, and such obstacles should also be avoided by the robot. In the following, we describe the module which detects positive and negative obstacles while at the same time allowing the robot to drive over manhole covers and grids which might be falsely classified as negative obstacles. Furthermore, while navigating in urban areas the robot may encounter other undesirable surfaces, such as lawn. Here, considering only the range data is not sufficient, as the surface appears to be smooth and drivable. Since our platform cannot safely traverse grass areas, where it might easily get stuck due to the small caster wheels, we also have to identify such areas to allow the robot to avoid them and thus to reduce the risk of getting stuck while trying to reach a desired location.

1) Vegetation Detection: In our implementation, we detect flat vegetation, such as grass, which cannot be reliably identified using only range measurements, by considering the remission values returned by the laser scanner along with the range [24]. We exploit the fact that living plants show a different characteristic with respect to the reflected intensity than the concrete surface found on streets.

In contrast to Wurm et al. [24], we detect vegetation with a fixed downward-looking laser instead of a tilting laser. This results in an easier classification problem, as the range of a beam hitting the presumably flat ground surface correlates with the incidence angle. Fig. 4 visualizes the data obtained with our platform. As can be seen from the image, the two classes can be separated by a non-linear function. We choose to fit a function to the lower boundary of the vegetation measurements, which allows us to identify measurements that are likely to be vegetation with high efficiency. The resulting classification accuracy is slightly worse compared to the original approach but faster and, as can be seen in Fig. 5, still sufficient for identifying regions covered by vegetation that should be avoided by the robot.

Fig. 4. Range and remission data collected by the robot observing either a concrete surface or vegetation.
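A minimal sketch of such a classifier is given below, assuming the boundary has been fitted offline as a low-order polynomial in the measured range; the coefficients are invented for illustration and would have to be learned from labeled data such as that shown in Fig. 4.

```python
def is_vegetation(range_m, remission, boundary_coeffs=(0.62, -0.03, 0.001)):
    """Classify a single beam of the downward-looking laser as vegetation.

    The boundary function maps range -> remission value that separates
    vegetation from concrete; measurements above the boundary are labeled
    as vegetation. boundary_coeffs are hypothetical polynomial coefficients
    (a0 + a1*r + a2*r^2), not values from the actual system.
    """
    a0, a1, a2 = boundary_coeffs
    boundary = a0 + a1 * range_m + a2 * range_m ** 2
    return remission >= boundary
```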
2) Tracking Dynamic Obstacles: To detect moving obstacles in the vicinity of the robot, like pedestrians or bicyclists, we employ a blob tracker based on the 2D range scanner data. We first accumulate the 2D laser readings in a 2D grid map for a short time window (about 100 ms in our current implementation). In addition to that, we keep a history of these maps for a larger amount of time (about 1 s). To find the dynamic elements in the map, we compare the current map with the oldest in the history and mark the obstacles that only appear in the newer map as dynamic. Then, we filter out those obstacles that appear to be dynamic but that were occluded in the older map and are therefore probably false positives. In the next step we cluster the dynamic obstacles into blobs using a region growing approach. Then, we find corresponding blobs in the preceding map using a nearest neighbor approach (rejecting neighbors above a predefined distance). Based on the mean positions of the corresponding blobs, we estimate velocities and bounding boxes that are aligned to the movement direction.

While this method is relatively simple (and occasionally creates false positives and sometimes wrongly merges multiple moving objects into one), it proved to be highly effective for the city navigation task. It can be computed in a highly efficient manner and provides a sufficient movement prediction for avoidance purposes, as can be seen in Fig. 5.
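The core of this tracker is a differencing of short-term occupancy grids followed by region growing. The sketch below operates on sets of occupied grid cells and omits the occlusion filtering, data association, and velocity estimation steps; it illustrates the differencing and clustering idea rather than the exact on-robot code.

```python
from collections import deque

def dynamic_cells(current_cells, oldest_cells):
    """Cells occupied now but free in the oldest map of the history are
    treated as dynamic (occlusion filtering omitted in this sketch)."""
    return set(current_cells) - set(oldest_cells)

def cluster_blobs(cells):
    """Group dynamic cells into blobs by 8-connected region growing."""
    cells = set(cells)
    blobs = []
    while cells:
        seed = cells.pop()
        blob, stack = {seed}, [seed]
        while stack:
            cx, cy = stack.pop()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    neighbor = (cx + dx, cy + dy)
                    if neighbor in cells:
                        cells.remove(neighbor)
                        blob.add(neighbor)
                        stack.append(neighbor)
        blobs.append(blob)
    return blobs

# A rolling history of roughly 1 s of 100 ms grids could be kept in a deque:
history = deque(maxlen=10)
```

Associating the blob centroids of consecutive maps by nearest neighbor then yields the velocity estimates and movement-aligned bounding boxes mentioned above.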

Fig. 5. Visualization of the different kinds of detected obstacles (top image). Blue points mark obstacles that are visible in the horizontal 2D laser scanners (areas a and b). Red points mark 3D obstacles that are visible in the downwards facing laser beams, but not in the 2D laser beams (mainly area c). Green points mark the detected vegetation/grass (area d). The black boxes with the arrows mark detected dynamic obstacles (area b). The remaining small yellow dots visualize the accumulated point cloud from the laser measurements. The scene depicts the robot and its planned trajectory in an environment with a lawn on the right, a building with a two-step staircase on the left (see bottom image) and four people moving behind it.

3) Detection of 3D Obstacles: Unfortunately, not all obstacles that might block the robot's path are visible in the horizontal laser scans. For this reason, we implemented a module that analyzes the scan lines captured by the downwards facing laser and the mirrored laser beams in front of the robot (see Section III). These lasers provide 3D information about the environment when the robot is moving.

In a first step, we perform a filtering on the raw scans to get rid of some false measurements. This especially targets spurious points typically returned at the borders of objects in the form of interpolated point positions between the foreground and the background. These points might create false obstacles. To detect them, we check for sudden changes in depth, which are noticeable as very small viewing angles from one point in the 2D scan to its immediate neighbors. Those border areas are erased from the scans before performing the obstacle detection procedure.

The main part of the obstacle detection process is done by analyzing only single scan lines, instead of the point cloud which is accumulated during driving. To decide whether points in these scan lines measure flat ground or an obstacle the robot cannot traverse, we analyze how straight the scan lines are and whether there are significant discontinuities in them, since flat ground would lead to straight scan lines. To be robust to noise, we use polynomial approximations for small segments of the scan lines and analyze their incline. Every point that lies in a segment which has an incline above a maximum value (10°) and a height difference above the maximum allowed step height (3 cm) is reported as a potential obstacle. Note that these parameters arise from the capabilities of the platform.

This heuristic proved to be very effective and has the advantage of being very efficient for single 2D scans, without the need of integration over a longer period of time. It also does not require information about position changes of the robot, which would be a source of considerable noise. In addition to that, there are no strong requirements regarding the external calibration parameters of the lasers.

Unfortunately, there are rare cases where this procedure fails. The main source of failures are false positives on manhole covers or gutters in the ground. Example images can be seen in Fig. 6 (top). Since some laser beams go through the structure and some do not, these structures appear to be negative obstacles. We implemented a heuristic to detect those cases by identifying areas with small holes. For this purpose, we extended the method described above and build a height map from the accumulated scan points while driving. For every scan point, we check if it lies substantially below the estimated height in both directions. This indicates a small hole. Obstacles close to such holes are ignored if they are below a certain height (10 cm). This approach proved to provide the desired reliability for different settings in which the naive approach would have reported non-traversable negative obstacles (see Fig. 6, bottom image, for an example).

Fig. 6. Top: Traversable structures that might be detected as negative obstacles by a naive method, because some laser beams can go through them. Bottom: Example case for the obstacle detection module. While the small canals on the robot's right side are classified as negative obstacles, the gutters are identified as traversable even though there are laser measurements going through the holes.

For every positive obstacle detected by the approach above, we check if this obstacle also has a corresponding obstacle in its vicinity in the 2D scans from the horizontal lasers. If not, the corresponding 3D position is reported as a 3D obstacle. If yes, it is considered to belong to the 2D obstacle and only stored for a short amount of time. The reason for this is that our sensor setup typically does not allow us to re-observe a 3D obstacle at regular intervals, since it is just seen once while driving by. Therefore, we have to keep the 3D obstacles in the map for an indefinite amount of time. On the other hand, obstacles observed in the 2D scanners can be re-observed and therefore do not have to be kept for a long time. This procedure prevents most dynamic objects (those that are also visible in 2D) from trapping the robot because it does not notice their disappearance. An example regarding the different obstacle types can be seen in Fig. 5.
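The per-segment test on the scan lines described earlier in this subsection can be illustrated as follows; a straight-line least-squares fit stands in for the polynomial approximation, and the thresholds correspond to the 10° incline and 3 cm step limits of the platform.

```python
import math

def segment_is_obstacle(points, max_incline_deg=10.0, max_step_m=0.03):
    """Check one short scan-line segment for a non-traversable incline/step.

    points: list of (horizontal_distance, height) samples along the segment.
    A least-squares line fit approximates the polynomial used in the
    original system; the segment is flagged if its incline exceeds the
    maximum value and its height difference exceeds the step height.
    """
    n = len(points)
    xs = [p[0] for p in points]
    zs = [p[1] for p in points]
    mean_x, mean_z = sum(xs) / n, sum(zs) / n
    var_x = sum((x - mean_x) ** 2 for x in xs)
    if var_x == 0.0:
        return True                       # vertical structure: clearly an obstacle
    slope = sum((x - mean_x) * (z - mean_z) for x, z in points) / var_x
    incline_deg = math.degrees(math.atan(abs(slope)))
    height_diff = max(zs) - min(zs)
    return incline_deg > max_incline_deg and height_diff > max_step_m
```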
4) Vibration-Based Ground Evaluation: While the approach described above allows the robot to identify objects that need to be avoided, the ground surface itself also needs to be taken into account while driving autonomously. Cobblestone pavement, which can typically be found in the centers of old European cities, leads to substantial vibration and shaking of the platform. Hence, we consider the measurements provided by the IMU to control the speed of the platform based on the current vibration. If the vibration exceeds a certain limit, the maximum allowed velocity of the platform is gradually decreased. As the accuracy of the laser sensors is not sufficient to classify the smoothness of the surface, the robot has no means to identify whether the surface allows driving fast again without inducing vibrations. Hence, we greedily increase the maximum velocity again after a short delay and repeat the entire process.
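A compact sketch of this velocity governor is given below; the vibration measure and all numeric constants are illustrative, not the tuned values used on Obelix.

```python
def update_max_velocity(current_max, vibration, nominal_max=1.0,
                        vibration_limit=2.0, decrease_step=0.1,
                        increase_step=0.05):
    """Greedy speed governor driven by an IMU vibration measure (e.g. the
    variance of the vertical acceleration over a short window).

    - If the vibration exceeds the limit, gradually lower the speed cap.
    - Otherwise, slowly raise it again towards the platform's nominal
      maximum to probe whether the surface permits driving faster.
    In the real system the increase happens only after a short delay; this
    sketch assumes it is called at a correspondingly low rate.
    """
    if vibration > vibration_limit:
        return max(0.2, current_max - decrease_step)
    return min(nominal_max, current_max + increase_step)
```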

E. Planner

Our planner considers different levels of abstraction to compute a feasible path for the robot towards a goal location. The architecture consists of three levels. On the highest level, only the topology of the environment is considered, i.e., the graph connecting local maps. The intermediate level employs Dijkstra's algorithm on the local maps to calculate way-points which serve as input for the low-level planner developed by Rufli et al. [25]. This low-level planner actually computes the velocity commands sent to the robot. Note that by using this hierarchy, we lose the optimality of the computed paths. However, as reported by Konolige et al. [19], the resulting paths are only approximately 10 % longer, while the time needed to obtain them can be several orders of magnitude shorter.

Given the pose estimates of the SLAM module, our planner constructs a topology $\mathcal{T}$ represented by a graph. This graph is constructed as follows: each node $\mathbf{x}_i$ of the graph is labeled with its absolute coordinates in the world. Furthermore, each node comes with its local traversability map describing the drivable environment in the neighborhood of $\mathbf{x}_i$, which serves as the background information for the planner. Additionally, each cell in the map encodes the cost of driving from $\mathbf{x}_i$ to the cell. This can be pre-computed efficiently by a single execution of Dijkstra's algorithm starting from $\mathbf{x}_i$. We refer to this as the reachability information of the map.

Two nodes are connected by an edge if there is a path from one node to the other given the information stored in their local maps. The edge is labeled with the cost for traversing the path, which is determined by planning on the local maps. If such a path cannot be found, we assign a cost of infinity to this edge. Otherwise, we assign to the edge the cost returned by the intermediate-level planner, which is typically proportional to the length of the path. Yet, in contrast to the straight-line distance, this cost better reflects the local characteristics of the environment. By this procedure, which is carried out once as a pre-processing step, the planner will consider the real costs for the robot to traverse an edge instead of only the Euclidean distance. Note that the set of edges contained in the topology graph $\mathcal{T}$ in general differs from the set of constraints $\mathcal{C}$ generated by the SLAM module. The topology graph exhibits a denser connectivity, as can be seen in Fig. 7.

Fig. 7. Left: Partial view of the pose-graph with its constraints used for estimating the poses. Right: The same view of the topology graph generated by the planner shows that this graph typically features a denser connectivity.

While driving autonomously, the robot may encounter unforeseen obstacles, e.g., a passage might be blocked by a construction site or parked cars. Our planner handles such situations by identifying the edges in the topology which are not traversable in the current situation. Those edges are temporarily marked with infinite costs, which allows the hierarchical planner framework to determine another path to the goal location.
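On the topological level, replanning around a blocked passage then reduces to a shortest-path query on $\mathcal{T}$ in which blocked edges are treated as having infinite cost. The sketch below uses Dijkstra's algorithm for this query purely as an illustration; the edge-cost table and node identifiers are placeholders for the precomputed traversal costs described above.

```python
import heapq

def plan_on_topology(edges, start, goal, blocked=frozenset()):
    """Shortest path on the topology graph T.

    edges:   dict mapping node -> list of (neighbor, traversal cost), where
             the costs come from the intermediate-level planner
    blocked: set of (node, neighbor) pairs currently marked non-traversable;
             they are treated as infinite cost, which forces a detour.
    """
    dist, parent = {start: 0.0}, {start: None}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbor, cost in edges.get(node, []):
            if (node, neighbor) in blocked:
                continue                  # infinite cost: skip this edge
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                parent[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    if goal not in parent:
        return None                       # no topological route at the moment
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = parent[node]
    return list(reversed(path))
```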
Planning a path from the current location of the robot towards a desired goal location works as follows. First, we need to identify the nodes or maps in $\mathcal{T}$ which belong to the current position of the robot and the goal. To this end, we refer to the reachability information of the maps. We select the maps with the shortest path from the center of the map to the robot and the goal, respectively. Given the robot node and the goal node, the high-level planner carr

