Autonomous Sensor and Action Model Learning for Mobile Robots


Autonomous Sensor and Action Model Learning for Mobile Robots

Daniel Stronger
Learning Agents Research Group
Department of Computer Sciences
The University of Texas at Austin

Dissertation Defense, June 19, 2008

Model Learning for Autonomous Robots
- Goal: to increase the effectiveness of autonomous mobile robots
- Plan: enable mobile robots to autonomously learn models of their sensors and actions

Action and Sensor Models
- Mobile robots rely on models of their actions and sensors
[Figure: agent architecture. Sensations (e.g., range finder readings, a camera image) pass through the sensor model to yield observations of the world state (e.g., car position and velocity); the control policy selects actions (e.g., throttle, brake, and steering positions), whose effects on the world state are described by the action model.]

Overview
- Action and sensor models are typically calibrated manually: laborious and brittle
  - A robot in a novel environment might encounter unfamiliar terrain or lighting conditions
  - Parts may wear down over time
- Goal: start without an accurate estimate of either model
- The technique is implemented and tested in:
  - A one-dimensional scenario: Sony Aibo ERS-7
  - The Aibo in a two-dimensional area
  - A second robotic platform: an autonomous car

Outline
1 Introduction
2 Model Learning on a Sony Aibo
  - Learning in One Dimension
  - Learning in Two Dimensions: Challenges
  - Addressing the Challenges
  - Results
3 Model Learning on an Autonomous Car
  - The Autonomous Car
  - Methods
  - Experimental Results
4 Conclusions
  - Related Work
  - Summary and Future Work

Test-bed Robotic Platform
[Figure: the test-bed robot, a Sony Aibo ERS-7.]

Example: Learning in One Dimension
- Consider a setting with the following properties:
  - The set of world states: continuous, one-dimensional
  - One sensor: readings correspond to world states
  - Range of actions: correspond to rates of change
  - Actions and sensors suffer from random noise
[Figure: an agent in a 1-D world; its actions set a velocity and its sensor reports the world state.]

Experimental Setup
- The sensor model maps landmark height in the image to distance
  - The mapping derived from the camera specs is not accurate
- The action model maps a parametrized walking action, W(x), to velocity
  - The parameter x corresponds to an attempted velocity; it is not accurate because of friction and joint behavior
[Figure: the Aibo walking toward a beacon; the sensor input is the landmark's height in the image, which indicates distance.]

The Sensor and Action Models
- Each model informs an estimate of the world state:
  - The sensor model maps an observation to a world state: $x_s(t_k) = S(obs_k)$
  - The action model maps an action C(t) to a velocity: $x_a(t) = x(0) + \int_0^t A(C(s))\,ds$
- The goal is to learn the two model functions, A and S
- Use polynomial regression as a function approximator
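To make the two estimates concrete, here is a minimal sketch in Python, assuming a 1-D world, a polynomial sensor model S, and a tabular action model A applied at a fixed time step. The coefficients, speeds, and step size are illustrative placeholders, not values from the dissertation.

```python
import numpy as np

# Hypothetical models: a polynomial S and a tabular A (command -> mm/s).
sensor_coeffs = np.array([-5.0, 2000.0])        # S(obs) = -5*obs + 2000
action_speeds = {-1: -150.0, 0: 0.0, 1: 150.0}
dt = 0.1                                        # seconds per control step

def x_s(obs):
    """Sensor-based state estimate: apply S directly to an observation."""
    return np.polyval(sensor_coeffs, obs)

def x_a(x0, commands):
    """Action-based estimate: integrate the modeled velocity A(C(t)) over time."""
    x = x0
    for c in commands:
        x += action_speeds[c] * dt
    return x
```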

Learning a Sensor Model
- Assume a given action model is accurate
- Consider the ordered pairs $(obs_k, x_a(t_k))$
- Fit a polynomial to the data
[Figure: data points and the fitted sensor model S, plotting x_a against obs.]
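The fit itself is ordinary polynomial regression; a sketch with NumPy (degree 3 is an arbitrary illustrative choice):

```python
import numpy as np

def fit_sensor_model(observations, xa_at_obs_times, degree=3):
    """Fit S: observation -> world state, using the action-based estimates
    x_a(t_k) as regression targets for the paired observations obs_k."""
    return np.polyfit(observations, xa_at_obs_times, degree)
```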

Learning an Action Model
- Assume a given sensor model is accurate
- Plot $x_s(t)$ against time
[Figure: data points x_s plotted against t.]

Learning an Action Model (cont.)
- Compute the action model that minimizes the error
- The problem is equivalent to another multivariate regression
[Figure: data points with the best-fit piecewise-linear trajectory, whose slope at time t is A(C(t)).]
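The equivalence to multivariate regression can be sketched as follows: the fitted trajectory is x(0) plus, for each action j, its velocity A_j times the cumulative time spent executing j, so the unknowns enter linearly and one least-squares solve recovers them. The names below (commands, n_actions, dt) are assumptions for illustration.

```python
import numpy as np

def fit_action_model(xs, commands, n_actions, dt):
    """Fit x(0) and per-action velocities A_j so that the piecewise-linear
    trajectory best matches the sensor-based estimates xs."""
    X = np.zeros((len(xs), n_actions + 1))
    X[:, 0] = 1.0                        # column for the unknown x(0)
    elapsed = np.zeros(n_actions)
    for k, c in enumerate(commands):
        elapsed[c] += dt                 # cumulative time spent on each action
        X[k, 1:] = elapsed
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(xs, float), rcond=None)
    return coeffs[0], coeffs[1:]         # estimated x(0), velocities A_j
```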

Learning Both Models Simultaneously
- Given very little to start with, learn both models
- Maintain both state estimates, $x_s(t)$ and $x_a(t)$
- Each one is used to fit the other model
- Both models grow in accuracy through a bootstrapping process
- Use weighted regression
  - More recent points are weighted higher: $w_i = \gamma^{\,n-i}$, $\gamma < 1$
  - Can still be computed incrementally
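A sketch of the incremental computation: with $w_i = \gamma^{n-i}$, the normal-equation statistics can be decayed by γ at each new point rather than refit from scratch. The feature vector phi is whatever basis the regression uses (e.g., powers of the observation); the class name is illustrative.

```python
import numpy as np

class DecayedRegression:
    """Weighted least squares with weights w_i = gamma**(n - i), updated
    incrementally by decaying the sufficient statistics at each new point."""
    def __init__(self, dim, gamma=0.99):
        self.M = np.zeros((dim, dim))    # running sum of w_i * phi_i phi_i^T
        self.v = np.zeros(dim)           # running sum of w_i * phi_i * y_i
        self.gamma = gamma

    def add(self, phi, y):
        self.M = self.gamma * self.M + np.outer(phi, phi)
        self.v = self.gamma * self.v + phi * y

    def coeffs(self):
        return np.linalg.lstsq(self.M, self.v, rcond=None)[0]
```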

Learned Models
- Measure the actual models with stopwatch and ruler
- Use optimal scaling to evaluate the learned models
[Figure: measured vs. learned sensor model (distance vs. beacon height) and measured vs. learned action model (velocity vs. action command).]
- Sensor model average error: 70.4 mm, 2.9% of range
- Action model average error: 29.6 mm/s, 4.9% of range

Learning in Two Dimensions
- The robot learns while traversing a rectangular field
- Combinations of forward, sideways, and turning motion
- The field has four known color-coded cylindrical landmarks

Aibo in Two Dimensions: Sensor Model
- The sensor model maps distance to a landmark to a distribution of observed heights in the image
- The model includes a polynomial function, f(dist(s))
- Also, the variance of the random noise added to image heights
- An additional variance parameter for the landmark's horizontal angle

Aibo in Two Dimensions: Action Model
- Actions correspond to attempted combinations of forward, sideways, and turning velocities
  - Attempted velocities control step size and direction
  - Inaccuracies are due to friction and joint behavior
- The action model maps attempted velocities to actual velocities
- A discrete set of 40 actions is used

Challenges
This problem presents many challenges:
- How do we incorporate actions and sensations into the world state?
  - For state estimation: Kalman filtering
- How do we determine which models are most consistent with the observed data?
  - For maximum likelihood parameter estimation: the Expectation-Maximization (EM) algorithm

Kalman Filtering
- A Kalman filter maintains successive state estimates
- It represents the mean and covariance of the distribution
[Figure sequence: the robot's state uncertainty relative to a landmark shrinks after a distance observation (observation update) and grows after an action (action update).]
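For reference, a minimal scalar Kalman filter showing the two updates illustrated above: an action (prediction) step and an observation (correction) step. The noise variances q and r stand for the action and sensor noise.

```python
class Kalman1D:
    """Scalar Kalman filter: mean and variance of the state estimate."""
    def __init__(self, mean, var):
        self.mean, self.var = mean, var

    def action_update(self, velocity, dt, q):
        """Predict: shift the mean by the modeled motion; uncertainty grows."""
        self.mean += velocity * dt
        self.var += q

    def observation_update(self, z, r):
        """Correct: blend in observation z with noise variance r."""
        k = self.var / (self.var + r)    # Kalman gain
        self.mean += k * (z - self.mean)
        self.var *= (1.0 - k)
```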

Maximum Likelihood Estimation
- Known: the robot's actions and observations
- Hidden variables: the world state over time
- The goal is to learn the system parameters: the action and sensor models
- Approach: find the models with maximum likelihood of producing the observed data, using the EM algorithm
  - E-step: given the models, find a probability distribution over the world state
  - M-step: given the distribution, find the maximum likelihood models
  - Alternate until convergence
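The alternation can be summarized in a short skeleton. Here e_step and m_step are supplied callables standing in for the EKFS smoothing and the model refits described on the following slides; this is a sketch of the loop structure, not the dissertation's implementation.

```python
def learn_models(actions, observations, models, e_step, m_step, n_iters=20):
    """Alternate E- and M-steps for a fixed iteration budget
    (or until the models stop changing)."""
    for _ in range(n_iters):
        state_dist = e_step(actions, observations, models)  # P(states | data, models)
        models = m_step(state_dist, actions, observations)  # max-likelihood refit
    return models
```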

Learning the Robot's Models
- The E-step determines a probability distribution over the robot's pose over time
  - The Extended Kalman Filter and Smoother (EKFS) approximates these distributions as multivariate Gaussians
- Definition of the M-step: the new parameters maximize the expected log likelihood of the observations with respect to the E-step distribution
- Adapting the M-step to learn these models is a contribution of this work

Adapting the M-step
- Given the E-step distribution $\hat{p}$ and observations O, find the parameters λ that maximize $E_{\hat{p}}[\log p(O \mid \lambda)]$
- Equivalently, find the action model, a, that maximizes the expected action log likelihood:
$$\sum_{t=1}^{T} \int_{s_{t-1},\,s_t} \hat{p}(s_{t-1}, s_t)\, \log p(s_t \mid s_{t-1}, a)\; ds_{t-1}\, ds_t$$
- and the sensor model, b, that maximizes the expected observation log likelihood:
$$\sum_{t=1}^{T} \int_{s_t} \hat{p}(s_t)\, \log p(o_t \mid s_t, b)\; ds_t$$

Learning the Sensor Model
- According to the M-step, we must find the sensor model b that maximizes $\sum_{t=1}^{T} \int_{s_t} \hat{p}(s_t)\, \log p(o_t \mid s_t, b)\, ds_t$
- Equivalently, find the sensor model function f that minimizes the weighted mean squared error:
$$\sum_{t=1}^{T} \int_{s_t} \hat{p}(s_t)\, \big(f(\mathrm{dist}(s_t)) - o_t\big)^2\; ds_t$$
- Minimize the error with weighted polynomial regression
- The solution is approximated by drawing samples from $\hat{p}(s_t)$
- The new variances are the model's weighted mean squared errors
- An additional derivation yields new velocities for each action
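A sketch of the sampled approximation, assuming Gaussian pose estimates (x, y, θ) from the EKFS: draw states from each $\hat{p}(s_t)$, pair every sample with that step's observation o_t, and fit f by polynomial least squares. The landmark position, sample count, and degree are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def m_step_sensor(means, covs, obs, landmark, degree=3, n_samples=50):
    """Refit f by sampling poses from each Gaussian p-hat(s_t).
    means[t], covs[t]: smoothed pose mean/covariance; obs[t]: o_t."""
    dists, targets = [], []
    for mu, cov, o in zip(means, covs, obs):
        samples = rng.multivariate_normal(mu, cov, size=n_samples)
        d = np.linalg.norm(samples[:, :2] - landmark, axis=1)  # dist(s_t)
        dists.extend(d)
        targets.extend([o] * n_samples)
    return np.polyfit(dists, targets, degree)
```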

Experimental Validation
- Experiments were performed on the Aibo and in simulation
  - The simulator models the robot's pose over time with noisy actions and observations
- Random actions were chosen with certain constraints:
  - Each chosen action was executed for five seconds at a time
  - The robot stays on the field
  - Actions are evenly represented
- The actual action and sensor models were measured and compared to the learned models
- Time for data collection: 25 minutes; learning on real-world data: 10 minutes; on simulated data: 1 hour

The Learned Sensor Model
[Figure: height in image (pixels) vs. landmark distance (1500-4000 mm) for the starting (analytical) model, the measured model, and the learned model.]

Learned Standard Deviations

Std. Dev | Real Height (pix) | Real Angle (rad) | Sim. Height (pix) | Sim. Angle (rad)
Learned  | 1.69              | 0.012            | 0.980             | 0.474

- The error in the real angle standard deviation is likely caused by the relative accuracy of the angle observations

The Learned Action Model

Velocity      | Avg. Error  | Compared to Range
Real Angular  | 0.135 rad/s | 3.2%
Sim. Forwards | 18.34 mm/s  | 2.2%
Sim. Sideways | 23.06 mm/s  | 3.2%
Sim. Angular  | 0.086 rad/s | 2.7%

- By comparison, the attempted angular velocities have an average error of 0.333 rad/s, or 7.9%

Outline
3 Model Learning on an Autonomous Car
  - The Autonomous Car
  - Methods
  - Experimental Results

The Autonomous Car
- A self-driving car provides many challenges for autonomous model learning
- Actions lead to accelerations and angular velocity:
  - Throttle, brake, and steering position
- Sensors provide information about pose and map:
  - Three-dimensional LIDAR
- Again, start without an accurate estimate of either model

Action Model
- Example model of acceleration, a:
$$a_t = C_1 + C_2\,\mathrm{throttle}_t + C_3\,\mathrm{vel}_t + C_4\,\mathrm{vel}_t\,\mathrm{brake}_t$$
- And angular velocity, ω:
$$\omega_t = C_1\,\mathrm{vel}_t + C_2\,\mathrm{vel}_t\,\mathrm{steer}_t$$
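Since the coefficients enter linearly, they can be fit from logged data with one least-squares solve. A sketch for the acceleration model (the angular velocity model is analogous); the array names are assumptions for illustration:

```python
import numpy as np

def fit_accel_model(throttle, vel, brake, accel):
    """Fit C1..C4 in a_t = C1 + C2*throttle_t + C3*vel_t + C4*vel_t*brake_t."""
    X = np.column_stack([np.ones_like(vel), throttle, vel, vel * brake])
    C, *_ = np.linalg.lstsq(X, accel, rcond=None)
    return C
```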

Three-Dimensional LIDAR
The Velodyne LIDAR sensor:
- 64 lasers return distance readings
- Each laser is at a different vertical angle and a different horizontal offset
- The unit spins around a vertical axis at 10 Hz
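A sketch of how one return might project to a 3-D point under this geometry, assuming the per-laser horizontal offset simply adds to the spin angle (a simplification of the real Velodyne model):

```python
import numpy as np

def project(r, spin_angle, vert_angle, horiz_offset):
    """Project one range return to (x, y, z); angles in radians."""
    a = spin_angle + horiz_offset        # assumed: offset adds to the spin angle
    xy = r * np.cos(vert_angle)          # range projected onto the ground plane
    return np.array([xy * np.cos(a), xy * np.sin(a), r * np.sin(vert_angle)])
```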

Autonomous Car Challenges
- The structure of the environment is unknown
  - It is a component of the world state
- High-bandwidth sensor: perceptual redundancy
[Figure: sensory input feeds an inverse sensor model to produce a world state estimate, which combines with action knowledge to train the action model.]
- Plan: learn the sensor model first
- Assumption: nearby angles have similar distances

Learning the Sensor Model
- Top view of uncalibrated laser projections
[Figure: top view (x, y in meters) of point projections from four vertically adjacent lasers, V = -13.0, -13.5, -14.0, and -14.5 degrees.]

Learning the Sensor Model (cont.)
- Consider pairs of vertically adjacent lasers
[Figure: projections from the laser pair V = -13.5 and V = -14.0 degrees.]

Learning the Sensor Model (cont.)
- Normalized cross-correlation identifies the angle difference
[Figure: the same laser pair, with the recovered angle difference between their projections.]
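A sketch of that step, assuming both lasers' ranges have been resampled onto the same evenly spaced grid of spin angles so the correlation can be computed circularly via the FFT:

```python
import numpy as np

def angle_offset(ranges_a, ranges_b, angle_step):
    """Signed angular shift of laser b relative to laser a that maximizes
    their normalized circular cross-correlation."""
    a = (ranges_a - ranges_a.mean()) / ranges_a.std()
    b = (ranges_b - ranges_b.mean()) / ranges_b.std()
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real / len(a)
    shift = int(np.argmax(corr))
    if shift > len(a) // 2:              # wrap to a signed offset
        shift -= len(a)
    return shift * angle_step
```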

Learning the Sensor Model (cont.)
- The process yields a relative horizontal angle for each laser
[Figure: projections from all four lasers after applying the learned horizontal angles.]

Identifying Car Motion
- Matching scans at consecutive times yields the car's motion
- The transformation is determined by Iterative Closest Point (ICP)
[Figure: scans at time t and time t + Δt; the scan motion implies the car motion.]
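For concreteness, a compact 2-D point-to-point ICP sketch: alternately match nearest neighbors and solve for the best rigid transform with the Kabsch (SVD) step. Brute-force matching is used for clarity; real scans would use a k-d tree.

```python
import numpy as np

def icp(src, dst, n_iters=20):
    """Estimate the rigid transform (R, t) aligning 2-D points src to dst."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(n_iters):
        moved = src @ R.T + t
        # Brute-force nearest neighbor in dst for each transformed point.
        idx = np.argmin(((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
        q = dst[idx]
        # Kabsch step: best rotation between the centered point sets.
        p_bar, q_bar = moved.mean(axis=0), q.mean(axis=0)
        U, _, Vt = np.linalg.svd((moved - p_bar).T @ (q - q_bar))
        Rk = Vt.T @ U.T
        if np.linalg.det(Rk) < 0:        # guard against a reflection
            Vt[-1] *= -1
            Rk = Vt.T @ U.T
        tk = q_bar - Rk @ p_bar
        R, t = Rk @ R, Rk @ t + tk       # compose with the running transform
    return R, t
```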

Learning the Action Model
- Given the car motion estimates from ICP:
  - Determine the overall orientation of the Velodyne
  - Define "forwards" to be the car's median direction
  - Combine with the action command to train the action model
- The learned action model yields more accurate car motion estimates
- The new motion estimates are used as starting points for ICP in an iterative procedure

Experimental Setup
- The car collected data while driving autonomously for 200 seconds
- The sensor model was evaluated by comparison to ground truth:
  - Horizontal angles calibrated manually by Velodyne
- For ground truth motions, an Applanix sensor was used:
  - A combined GPS and inertial motion sensor
- Ground truth accelerations and angular velocities were compared to the action model output

Learned Sensor Model
[Figure: learned vs. true horizontal angles (70-110 degrees) across the 64 laser indices.]
- The average horizontal angle error is 0.54°, 3.0% of range

Learned Action Model
[Figure: action model output vs. ground truth over 125-150 seconds, for acceleration (m/s²) and angular velocity (rad/s).]
- The average acceleration error is 0.39 m/s², 6.8% of range
- The average angular velocity error is 0.74°/s, 6.4% of range

Outline
4 Conclusions
  - Related Work
  - Summary and Future Work

Related Work
- Developmental robotics: [Pierce and Kuipers, '97; Weng et al., '01; Oudeyer et al., '04; Olsson et al., '06]
- Learning a sensor model: [Tsai, '86; Moravec and Blackwell, '93; Hahnel et al., '04]
- Learning an action model: [Roy and Thrun, '99; Martinelli et al., '03; Duffert and Hoffmann, '05]
- Dual estimation for Kalman filters: [Ghahramani and Roweis, '99; Briegel and Tresp, '99; de Freitas et al., '99]

Summary
- Developed a novel methodology that enables a mobile robot to autonomously learn action and sensor models
- The method was validated on:
  - A Sony Aibo in a one-dimensional scenario
  - The Aibo and a simulation in two dimensions
  - An autonomous car

Future Work
- Adapt the method to other robots and more detailed models
- Explore possibilities for learning the features
- Learn about shapes and affordances of environmental objects
- Incorporate curiosity mechanisms

Questions?
[Figure: the learned sensor model plot again: starting (analytical), measured, and learned models of height in image vs. landmark distance (mm).]
- Thanks: UT Austin Villa and Austin Robot Technology

Aibo in One Dimension: Additional Results
- Tried three different functions for the initial action model estimate, $A_0$:
  - $A_0(c) = c$
  - $A_0(c) = \mathrm{Sgn}(c)$
  - $A_0(c) = 1$

Aibo in One Dimension: Additional Results (cont.)
- Ran 15 trials on each starting point
- Recorded the number of successes and the average errors
- Even with no information ($A_0(c) = 1$), success in 10/15 trials

A_0              | Sensor Model (mm) | Action Model (mm/s) | Success Rate
A_0(c) = c       | 70.4 ± 13.9       | 29.6 ± 12.4         | 15/15
A_0(c) = Sgn(c)  | 85.3 ± 24.5       | 31.3 ± 9.2          | 15/15
A_0(c) = 1       | 88.6 ± 11.5       | 27.3 ± 6.2          | 10/15

Additional Results: EM in One Dimension
- Goal: apply the EM-based algorithm to the Aibo in the one-dimensional domain
- Action model: a table-based function from actions to forwards velocities
- Sensor model: a polynomial from landmark distance to image height
- Results:
  - Average sensor model error: 0.83 pixels
  - Average action model error: 22.1 mm/s

Learning the Action Model
- In the M-step, for each action, A, we must maximize:
$$\sum_{t : c(t) = A} \int_{s_{t-1},\,s_t} \hat{p}(s_{t-1}, s_t)\, \log p(s_t \mid s_{t-1}, a)\; ds_{t-1}\, ds_t$$
- The action model is determined by $\mu_A$, the mean pose displacement caused by action A over one time step
- A derivation yields an expression for the maximizing displacement $\mu_A$:
$$\mu_A = \frac{1}{|\{t : c(t) = A\}|} \sum_{t : c(t) = A} \int_{s_{t-1},\,s_t} \hat{p}(s_{t-1}, s_t)\, d(s_{t-1}, s_t)\; ds_{t-1}\, ds_t$$
where $d(s_{t-1}, s_t)$ is the displacement between successive states.
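Under the Gaussian EKFS estimates, if d is the pose difference, its expectation reduces to the difference of smoothed means, so $\mu_A$ becomes an average of smoothed displacements. A sketch under that assumption, taking displacements in the global frame for simplicity:

```python
import numpy as np

def mu_for_action(smoothed_means, commands, action):
    """Mean one-step pose displacement over steps that executed `action`.
    smoothed_means[t] is the EKFS mean of s_t; commands[t] is c(t)."""
    steps = [t for t in range(1, len(smoothed_means))
             if commands[t] == action]
    disps = [smoothed_means[t] - smoothed_means[t - 1] for t in steps]
    return np.mean(disps, axis=0)
```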

Aibo in 1D: Learning Both Models
- Ramping-up process
[Figure: timeline marking t = 0, t = t_start, and t = 2·t_start; the action model is held at A ≡ 0 at the start while the learned estimates S_t and A_t are phased in.]

Aibo in 1D: Estimates Converge
- Over time, $x_s(t)$ and $x_a(t)$ come into stronger agreement
[Figure: x(t) estimates plotted against time (s), showing the two estimates converging.]

Aibo in 1D: Learning Curves
- Average fitness of mod
