
IntelliSAR

Derek Sun
College of Engineering
University of Massachusetts, Amherst
Amherst, USA
dereksun@umass.edu

Yong Li
College of Engineering
University of Massachusetts, Amherst
Amherst, USA
yonli@umass.edu

Arthur Zhu
College of Engineering
University of Massachusetts, Amherst
Amherst, USA
tianyezhu@umass.edu

Abstract—Rescue teams face many challenges when traversing unknown environments. This can be dangerous for both the rescuer and the rescued, so fully understanding the situation is crucial in preventing unnecessary risks. The goal of our project is to provide rescue teams with the ability to remotely examine the situation and the environment in order to reduce possible risks or dangers and increase the rescue team's efficiency. This paper introduces IntelliSAR, a ground-based robot with artificial intelligence capabilities aimed at supporting post-disaster search and rescue operations. Our system leverages machine learning and computer vision to perform object detection and assist in the task of identifying and locating victims. IntelliSAR supports semi-autonomous navigation as well as manual, remote navigation. The robot's night vision camera and sturdy chassis allow it to remain effective in low-light and rugged environments. In addition, the robot is equipped with a temperature and humidity sensor and an ultrasonic sensor. The data from all the sensors are wirelessly transmitted back to an external PC via Wi-Fi and provide the user with a wide range of real-time environmental information.

Keywords—IntelliSAR, search and rescue, object detection, semi-autonomous navigation

I. INTRODUCTION

Humans have been fighting against natural disasters for thousands of years. Cave landslides, flash floods, mudslides, and earthquakes are just a few examples of impactful and dangerous natural disasters. The number of deadly earthquakes is projected to increase in the 21st century, and the number of people killed by earthquakes is expected to exceed that of the past. "Combining fatalities caused by the background rate with fatalities caused by catastrophic earthquakes (≥100,000 fatalities) indicates global fatalities in the 21st century will be 2.57 ± 0.64 million if the average post-1900 death toll for catastrophic earthquakes (193,000) is assumed" [1].

When faced with these dangerous natural disasters, it is important for search and rescue tasks to be carried out as efficiently as possible. The first step in starting a rescue after a natural disaster is to search for victims [2]. Searching for victims in a post-disaster situation requires accuracy, speed, and flexibility. Rescuers must be able to get reliable information on the post-disaster situation through environmental observation and testing; this information lays the foundation for correct and efficient implementation of the subsequent rescue work. Rescue workers, materials, and rescue facilities must be able to quickly arrive and set up following a disaster. Rescuers can then begin the search and rescue work as soon as possible, thereby gaining valuable time, improving rescue efficiency, and helping more victims survive. Flexibility is necessary due to the unpredictable situations and harsh environments that may be present in a post-disaster situation.

In order to accommodate the aforementioned characteristics, our team developed IntelliSAR: an intelligent, ground-based robot designed to be used in post-disaster search and rescue situations.
Search and rescue focuses on locating and extracting people trapped in collapsed or damaged structures. In these types of situations, rescuers are under extreme time pressure; after 48 hours, the mortality rate drastically increases due to lack of air, food, water, and medical treatment. Attempting to rescue victims can be as dangerous as the initial disaster for both the victim and the rescuer [3]. In such conditions, lightweight and intelligent robots can greatly benefit search and rescue initiatives by exploring ahead of rescue teams and reporting conditions that may be hazardous [4]. IntelliSAR was designed with this goal of supporting rescue teams in mind. Through a user-friendly web application, the operator can remotely control IntelliSAR, view live temperature and humidity data, and view a live, night vision enabled video feed. Additionally, IntelliSAR is able to perform object detection and assist in the task of identifying and locating victims through the use of machine learning and computer vision techniques. All of this information is relayed to the operator through the web application and allows rescue teams to thoroughly examine the post-disaster situation without exposing themselves to potential dangers.

The rest of the paper is organized as follows. Section 2 discusses related work. Section 3 discusses the details and specifications of IntelliSAR's design. Section 4 discusses the implementation, results, and analysis of our prototype. Section 5 describes how the completed IntelliSAR system would have looked. Section 6 discusses the project management aspects and our team dynamic. Section 7 concludes the paper with a summary of our prototype and project results.

II. RELATED WORK

Search and rescue robots provide numerous benefits for disaster response and have been used following several high-profile disasters such as 9/11 and Hurricane Katrina [5]. These robots are constantly being developed and improved upon in order to better enhance search and rescue efforts. A few examples of modern search and rescue robots are the Inuktun VGTV-Xtreme [6], the iRap Robot [7], and the iRobot PackBot [8].

The iRap Robot, shown in figure 2, was designed and built for the RoboCup Rescue 2018 competition. This robot features remote exploration, motion detection, 2D map generation, thermal imaging, and high maneuverability. The iRap Robot's dimensions are 100x60x60 cm, but it can reach 100x60x200 cm when standing up. The robot was reported to have cost approximately 30,000 USD to create, with the bulk of that spent on the 6-axis robot arm [7]. The iRap Robot's remote exploration and object/motion detection features are similar to IntelliSAR's, but the iRap Robot focuses more on competitive maneuverability and robustness. The drawbacks of this approach are the high cost and the large, bulky chassis.

Figure 2. iRap Robot [7]

The Inuktun VGTV-Xtreme, shown in figure 1, was originally designed for industrial inspection purposes but was adapted to fit a search and rescue role. The robot was first used in 2001 during the World Trade Center disaster and went through several design improvements afterwards. The main features of the Inuktun VGTV-Xtreme are its compact size, remote video feed, and the considerable maneuverability provided by its tracked design. However, it was confirmed to have a short battery life of less than ten minutes during the 2005 La Conchita mudslides. Because of its early design, the Inuktun VGTV-Xtreme was not fitted with any semi-autonomous or autonomous driving capabilities. Despite its relatively straightforward and simple design, the Inuktun VGTV-Xtreme was one of the first explorations into using robots for search and rescue purposes, and it effectively fulfilled the ultimate purpose of providing responders with more information about the environment.

Figure 1. Inuktun VGTV-Xtreme [6]

The iRobot PackBot, shown in figure 3, was designed for military personnel in high-threat battlefield scenarios and can be used for surveillance and reconnaissance, bomb disposal, vehicle inspection, and various other dangerous missions. The iRobot PackBot is remotely controlled, with a few semi-autonomous features, and has dimensions of 70x50x20 cm. Purchasing an iRobot PackBot costs about 100,000 to 200,000 USD [9]. The iRobot PackBot's search and rescue features are similar to IntelliSAR's, but the PackBot is more focused on military utility and on the robustness and reliability of the robot. Its main drawback is the extremely high cost per unit.

Compared with other modern search and rescue robots, IntelliSAR's biggest advantage is its low cost: IntelliSAR is highly practical and cost-effective. The main service of IntelliSAR is search and rescue; IntelliSAR is suited to this role with its accurate object detection and robustness, and it was not designed with military use in mind. The use of an ultrasonic sensor instead of a Lidar sensor for autonomous navigation helps save costs without drastically reducing the reliability of the autonomous navigation. Table 1 shows a comparison between IntelliSAR and the three aforementioned search and rescue robots.

                    IntelliSAR         Inuktun VGTV-Xtreme  iRap Robot         iRobot PackBot
Navigation          Semi-auto/Manual   Manual               Auto/Manual        Semi-auto/Manual
Navigation Sensor   Ultrasonic         N/A                  Lidar              Stereo Camera, Lidar
Object Detection    Person             N/A                  Hazmat/QR
Target Audience     Search and Rescue  Search and Rescue    Search and Rescue  Military
Cost                Low ($500)         High ($10,000)       High ($30,000)     Very high ($100,000+)

Table 1. Comparison of IntelliSAR and Other Search and Rescue Robots

III. DESIGN

This section discusses the details of IntelliSAR's design and implementation.

A. System Overview

IntelliSAR consists of the following components: a Yahboom G1 robot chassis [10], a Raspberry Pi 4B [11], a 4WD expansion board [12], a Yahboom horizontal ultrasonic distance sensor [13], a DHT11 temperature sensor [14], a Coral USB accelerator [15], a MakerFocus Raspberry Pi camera [16], Yahboom 370 motors [17], SG90 servos [18], and a battery pack. Figure 4 shows the built IntelliSAR system with labeled components.

Figure 4. IntelliSAR with Labeled Components

The aluminum alloy chassis securely houses the individual components in their designated locations. The Raspberry Pi and the 4WD expansion board are both mounted on the lower platform, beneath the upper platform. The ultrasonic sensor is mounted on a servo near the front of the robot. The temperature sensor is mounted towards the middle of the chassis, near the 4WD expansion board. The USB accelerator is mounted at the back of the chassis, beneath the upper platform. The camera is attached to a gimbal system made up of two servos on the upper platform of the chassis. The robot's treads are rotated by two motors, one on each side at the rear of the chassis. The battery pack is mounted on the bottom side of the chassis.

Figure 5 shows the data flow throughout IntelliSAR's overall system.

Figure 5. Block and Data Flow Diagram

The on-board Raspberry Pi 4B is the brain of the system and is responsible for hosting the web server, interfacing with the motor controller and camera controller on the 4WD expansion board, processing all the sensor data, and performing object detection. The Flask web server allows a remote operator to send control instructions to the robot and view the processed sensor data and live video feed. The robot and the remote browser are connected over Wi-Fi or a mobile hotspot, and all data transfer uses the HTTP protocol. The Raspberry Pi sends signals to the motor controller to control the direction and speed of the wheel motors, and sends signals to the camera controller to rotate the servos that make up the camera gimbal system. The raw distance data from the ultrasonic sensor is captured and processed by the Raspberry Pi for use in the semi-autonomous navigation feature. The raw temperature data is processed and converted into a readable format by the Raspberry Pi so that it can be displayed on the web application. The camera sends raw image data to the Raspberry Pi via the camera bus. The image data is first processed by the object detection component of the application, and then the output H.264 video feed with the resulting detections is sent to the web server. The USB accelerator is connected to the Raspberry Pi's USB port and assists with the processing and computations needed for object detection.
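As a rough illustration of this control path, the following minimal Flask sketch maps movement commands arriving as HTTP requests onto GPIO outputs. The route name, pin numbers, and drive() helper are illustrative assumptions rather than IntelliSAR's actual implementation, which drives the motors through the 4WD expansion board.

```python
# Minimal sketch of a Flask control server, assuming direct GPIO wiring.
# Route names and pin numbers are hypothetical.
from flask import Flask
import RPi.GPIO as GPIO

app = Flask(__name__)

# Hypothetical GPIO pins wired to the motor driver's direction inputs.
LEFT_FWD, LEFT_REV, RIGHT_FWD, RIGHT_REV = 20, 21, 19, 26

GPIO.setmode(GPIO.BCM)
for pin in (LEFT_FWD, LEFT_REV, RIGHT_FWD, RIGHT_REV):
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def drive(left_fwd, left_rev, right_fwd, right_rev):
    """Set all four motor-direction pins at once."""
    GPIO.output(LEFT_FWD, left_fwd)
    GPIO.output(LEFT_REV, left_rev)
    GPIO.output(RIGHT_FWD, right_fwd)
    GPIO.output(RIGHT_REV, right_rev)

@app.route("/move/<direction>", methods=["POST"])
def move(direction):
    # Each button on the web UI issues a request like POST /move/forward.
    commands = {
        "forward":  (1, 0, 1, 0),
        "backward": (0, 1, 0, 1),
        "left":     (0, 1, 1, 0),  # spin turn: sides run in opposite directions
        "right":    (1, 0, 0, 1),
        "stop":     (0, 0, 0, 0),
    }
    if direction not in commands:
        return "unknown direction", 400
    drive(*commands[direction])
    return "ok"

if __name__ == "__main__":
    # Serve on all interfaces so a browser on the same Wi-Fi network can connect.
    app.run(host="0.0.0.0", port=5000)
```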

Table 2 shows the specifications of the IntelliSAR system.

Specification                                 Value
Weight                                        7 lb
Dimensions                                    256 x 183 x 213 mm
Battery                                       3.7 V 18650 battery x 3
Battery Life                                  120 min
Control Distance                              50 m indoors, 100 m outdoors
Camera                                        Night vision, 5 MP, 1080p
Temperature Measurement Range                 0 to 50 °C
Temperature Measurement Accuracy              ±2 °C
Speed Range                                   0.7 to 6.5 km/h
Camera Rotation                               Horizontal: 0 to 180°; Vertical: 45 to 180°
Obstacle Detection Range                      2 to 500 cm
Obstacle Detection Accuracy                   0.3 cm
Object Detection Range                        6 m (best case scenario)
Video Stream w/ Object Detection Frame Rate   H.264 640x480 @ 30 FPS

Table 2. Specifications for IntelliSAR

A weight of 7 lb and dimensions of 256x183x213 mm allow IntelliSAR to maintain a small form factor and convenient setup and deployment. IntelliSAR has a large enough battery to guarantee two hours of normal operation (analyzed in section IV, part D). In terms of remote control range, IntelliSAR can be controlled from up to 50 meters away indoors or up to 100 meters away outdoors (analyzed in section IV, part E). The control range is much lower indoors because the radio frequency cannot penetrate solid objects such as walls and floors. The camera supports night vision and a maximum resolution of 1920x1080, but we chose to use 640x480 to lower the amount of processing power needed for the object detection. The DHT11's temperature measurement range is 0 to 50 degrees Celsius, with an accuracy of ±2 degrees Celsius [14]. The speed at which IntelliSAR moves is adjustable, with a minimum speed of 0.7 km/h and a maximum speed of 6.5 km/h. With the ultrasonic distance sensor, IntelliSAR is able to detect obstacles up to 5 meters away with an accuracy of 0.3 centimeters [13]. In terms of object detection range, IntelliSAR is able to detect people up to 6 meters away in the best case scenario of a non-blurry image and a clearly distinguishable human form. The object detection enabled video stream shown on the web application is 640x480 resolution and runs at approximately 30 FPS. Our specification analysis helps us to better evaluate the performance of IntelliSAR and determine areas in need of optimization. IntelliSAR meets all of our specification goals, but we also determined that several important specifications, such as battery life, speed, and obstacle detection accuracy, could be improved by simply purchasing upgrades to the corresponding hardware components.

B. Robot

The on-board Raspberry Pi 4B is responsible for hosting the web server, interfacing with the motor controller and camera controller on the 4WD expansion board, processing all the sensor data, and performing object detection.

Manipulation of the robot includes controlling the wheel motors and the camera platform. Two wheel motors are used to move the robot in different directions at adjustable speeds. The camera is mounted on a platform controlled by two servos in order to provide a greater viewing scope. The user interface is provided through the web page, through which the operator is able to control the robot's moving direction, speed, and camera rotation. The navigation instructions are sent to the Flask web server on the Raspberry Pi as HTTP requests, and these instructions are passed to the GPIO pins that connect to the motors and servos.
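The camera gimbal servos can be positioned with a standard 50 Hz PWM signal. The following is a minimal sketch of driving one SG90 servo with software PWM; the pin assignment is a hypothetical, and the production robot routes servo control through the expansion board's camera controller rather than driving the servo directly.

```python
# Minimal sketch of SG90 servo positioning via software PWM on the Pi.
# PAN_PIN is a hypothetical GPIO assignment for the horizontal gimbal servo.
import time
import RPi.GPIO as GPIO

PAN_PIN = 23

GPIO.setmode(GPIO.BCM)
GPIO.setup(PAN_PIN, GPIO.OUT)
pan = GPIO.PWM(PAN_PIN, 50)  # SG90 servos expect a 50 Hz control signal
pan.start(0)

def set_angle(pwm, angle):
    """Map 0-180 degrees onto the SG90's roughly 2.5%-12.5% duty-cycle range."""
    duty = 2.5 + (angle / 180.0) * 10.0
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.3)          # give the servo time to reach the position
    pwm.ChangeDutyCycle(0)   # stop driving the line to reduce jitter

set_angle(pan, 90)  # center the camera
```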
The ultrasonic distance sensor is used for obstacle detection and avoidance in the IntelliSAR system. Every second, the application raises the PIN_TRIGGER voltage from 0 to 1 for 20 ms, instructing the sensor to send out an ultrasonic pulse. Once the sensor receives the reflected waves, it sets PIN_ECHO to 1. The application monitors PIN_ECHO and records when the reflected wave returns. The distance between the robot and the surrounding obstacles can then be calculated as

$d = \frac{t \cdot v_{\text{sound}}}{2}$

where $t$ is the round-trip time of the pulse and $v_{\text{sound}}$ is the speed of sound in air. These calculated distances are used to analyze the distance of the detected obstacles and determine the robot's next movement direction. The navigation instructions are sent to the motor controller via the GPIO pins and are then converted into control signals that drive the wheel motors. Figure 6 shows the ultrasonic sensor and the aforementioned pins.

Figure 6. Ultrasonic Sensor
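A minimal sketch of this trigger/echo measurement is shown below. The pin numbers are assumptions; the 20 ms trigger pulse and the halved round-trip calculation follow the description above.

```python
# Minimal sketch of the ultrasonic trigger/echo distance measurement.
# Pin assignments are hypothetical.
import time
import RPi.GPIO as GPIO

PIN_TRIGGER, PIN_ECHO = 17, 18
SOUND_SPEED = 343.0  # speed of sound in air, m/s (at roughly 20 °C)

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN_TRIGGER, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(PIN_ECHO, GPIO.IN)

def read_distance_cm():
    # Raise the trigger line for 20 ms to tell the sensor to emit a pulse.
    GPIO.output(PIN_TRIGGER, GPIO.HIGH)
    time.sleep(0.02)
    GPIO.output(PIN_TRIGGER, GPIO.LOW)

    # Time how long the echo line stays high: that is the round-trip time.
    sent = received = time.time()
    while GPIO.input(PIN_ECHO) == 0:
        sent = time.time()
    while GPIO.input(PIN_ECHO) == 1:
        received = time.time()

    # distance = (round-trip time * speed of sound) / 2, converted to cm
    return (received - sent) * SOUND_SPEED / 2 * 100
```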

Similarly, the environmental sensor captures the surrounding temperature and humidity data and transfers it to the Raspberry Pi through the connected GPIO pins. The Raspberry Pi acquires the raw sensor data and interprets it to output meaningful temperature and humidity values. When an HTTP client requests the environmental data from the web server, the web server polls the values and displays them on the client.
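The sketch below illustrates this poll-on-request pattern for the DHT11. The Adafruit_DHT library, the route name, and the pin number are assumptions made for illustration, not necessarily what the IntelliSAR code uses.

```python
# Minimal sketch of reading the DHT11 and exposing the values over HTTP.
import Adafruit_DHT
from flask import Flask, jsonify

app = Flask(__name__)
DHT_PIN = 4  # hypothetical GPIO pin the DHT11 data line is wired to

@app.route("/environment")
def environment():
    # read_retry polls the sensor until a checksum-valid reading arrives.
    humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT11, DHT_PIN)
    if humidity is None or temperature is None:
        return jsonify(error="sensor read failed"), 503
    return jsonify(temperature_c=temperature, humidity_pct=humidity)
```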
The Coral USB accelerator, with its on-board Edge TPU (Tensor Processing Unit), is designed to speed up inferencing on edge devices such as IntelliSAR's on-board Raspberry Pi [15]. The Edge TPU is an ASIC (application-specific integrated circuit) chip designed by Google that uses highly parallelized and directly connected ALUs (arithmetic logic units) to achieve high computational throughput on the calculations necessary in a neural network. Due to the efficient parallelization and the removal of memory accesses, the Edge TPU is able to do this with less power consumption and a smaller footprint [19]. The accelerator is secured to the back of the robot chassis and connected to the Raspberry Pi via a USB cable. Incorporating the Coral USB accelerator allowed us to drastically increase the object detection video feed's frame rate from 4 to 30 frames per second, providing a much better user experience without having to sacrifice any functionality or accuracy. The Coral USB accelerator is shown in figure 7.

Figure 7. Coral USB Accelerator

C. Web Application

In order to enable easy deployment and remove limitations such as installing and configuring software for robot operation, we decided to use a web application for displaying the data and controlling the robot. This also allows the robot to be used from a wide range of devices, including laptops, tablets, and phones. We used the Python Flask web framework for IntelliSAR's web application because it is lightweight and easy to work with. When designing the webpage, the Bootstrap CSS framework was used to ensure compatibility with all devices, no matter the screen size. Figure 8 shows the user interface that we designed for our web application. The live video feed with toggleable object detection is shown in the center of the UI. The buttons to control the direction and speed of IntelliSAR's wheel motors are located on the left side, and the buttons to control the servos of the camera gimbal system are located on the right. At the very bottom is the button to turn the semi-autonomous navigation on or off. The live temperature data is viewed on a separate page that can be accessed through the dropdown menu at the top right.

Figure 8. Web Application UI

D. Semi-Autonomous Navigation

One ultrasonic distance sensor is mounted on a servo at the front of the robot chassis. The sensor sends out ultrasonic waves that are reflected back and captured if an obstacle is encountered. The distance between the sensor and the obstacle is calculated by the following formula:

$d = \frac{t_{\text{received}} - t_{\text{sent}}}{2} \cdot v_{\text{sound}}$

By monitoring the surrounding space, IntelliSAR is able to avoid obstacles and navigate semi-autonomously. The currently implemented algorithm is described by the flowchart shown in figure 9; a sketch of the corresponding control loop follows the figure. IntelliSAR continues forward until the distance sensor detects an obstacle within 30 centimeters. Then, IntelliSAR turns right, turns left, or reverses, based on the data returned by the pivoting distance sensor. We used a distance of 30 centimeters for our algorithm because that is the minimum distance at which IntelliSAR is able to successfully turn around.

Figure 9. Semi-Autonomous Navigation Flowchart
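The following sketch captures the obstacle-avoidance loop summarized by figure 9. Only the 30 cm threshold and the turn/reverse behavior come from the description above; the motion helpers, the sensor sweep angles, and the polling interval are illustrative assumptions, with stubs standing in for the GPIO code from the earlier sketches.

```python
# Minimal sketch of the semi-autonomous navigation loop from figure 9.
import time

OBSTACLE_CM = 30  # minimum distance at which IntelliSAR can still turn around

# Placeholder helpers; on the robot these would set GPIO pins as in the
# earlier motor and ultrasonic sketches.
def forward():           print("forward")
def reverse():           print("reverse")
def turn_left():         print("turn left")
def turn_right():        print("turn right")
def stop():              print("stop")
def pivot_sensor(deg):   print(f"pivot sensor to {deg} degrees")
def read_distance_cm():  return 100.0  # stub; see the ultrasonic sketch

def navigate_step():
    if read_distance_cm() > OBSTACLE_CM:
        forward()
        return
    stop()
    # Pivot the distance sensor to sample the clearance on each side.
    # The 45/135-degree sweep angles are assumptions.
    pivot_sensor(45)
    right_clearance = read_distance_cm()
    pivot_sensor(135)
    left_clearance = read_distance_cm()
    pivot_sensor(90)  # re-center the sensor
    if max(left_clearance, right_clearance) <= OBSTACLE_CM:
        reverse()  # blocked on both sides: back out
    elif right_clearance >= left_clearance:
        turn_right()
    else:
        turn_left()

if __name__ == "__main__":
    while True:
        navigate_step()
        time.sleep(1.0)  # the sensor is polled roughly once per second
```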

E. Object Detection

In order to better support post-disaster search and rescue operations and help increase the efficiency of rescue teams, IntelliSAR provides person detection capabilities through the use of machine learning and computer vision techniques. This function is able to reliably detect victims in various poses, such as standing straight, sitting down, and lying down, and was implemented using a custom-trained object detection model. In order to custom train our detection model in an efficient manner, we used a technique called transfer learning. Transfer learning is a process in which a model pre-trained on a large, general dataset is reused as the starting point for a new task and fine-tuned on a smaller, task-specific dataset.
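The sketch below illustrates how a detection model compiled for the Edge TPU can be run on camera frames using the pycoral library. The model filename is hypothetical (the custom transfer-learned model is not published here), and the frame capture and drawing use OpenCV as an assumed convenience.

```python
# Minimal sketch of person detection on the Coral USB accelerator with pycoral.
import cv2
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

# Hypothetical filename for a TFLite model compiled for the Edge TPU.
interpreter = make_interpreter("person_detector_edgetpu.tflite")
interpreter.allocate_tensors()
width, height = common.input_size(interpreter)

cap = cv2.VideoCapture(0)  # camera exposed as /dev/video0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize the frame to the model's input size and run inference on the TPU.
    rgb = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
    common.set_input(interpreter, rgb)
    interpreter.invoke()
    # Scale the boxes from model-input coordinates back to the full frame.
    sx, sy = frame.shape[1] / width, frame.shape[0] / height
    for obj in detect.get_objects(interpreter, score_threshold=0.5):
        b = obj.bbox
        cv2.rectangle(frame,
                      (int(b.xmin * sx), int(b.ymin * sy)),
                      (int(b.xmax * sx), int(b.ymax * sy)),
                      (0, 255, 0), 2)
    # The annotated frame is what would be encoded and streamed to the web app.
```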

