Human Assisted Capture-The-Flag In An Urban Environment


Matthew A. Blake, Gerrit A. Sorensen, James K. Archibald, Randal W. Beard
Department of Electrical and Computer Engineering
Brigham Young University
Provo, Utah 84602

This work was funded by DARPA grant NBCH1020013.

Abstract— This paper describes a new multi-agent testbed developed to support research in multi-agent systems that incorporate both autonomous operation and human input. Teams compete in a modified version of the game of capture-the-flag in a maze-like environment of arbitrary complexity. Each team consists of a number of mobile robots, a human commander, and a UAV that provides essential position information. The makeup of each team, the motion constraints, and the nature of the game present a significant challenge to any multi-agent coordination scheme. This paper describes the rules of the game and the hardware and software infrastructure we have developed to play it. The paper further describes some of our work in devising an effective technique for planning paths through the maze for the ground-based robots.

I. INTRODUCTION

Among researchers in robotics there is increasing interest in using teams of robots to provide solutions in many application domains. To expedite the development of the control and coordination strategies required for complex multi-agent systems, researchers have adopted a variety of testbed problems, of which competitive games for teams of mobile robots are well-known examples. Competitions using real robots advance the overall state of the art because successful teams are likely to be based on effective coordination techniques that can be applied to other multi-robot applications with real-time constraints.

To support our research in adjustable autonomy, coordinating autonomous robots with human directives, we hoped to identify an existing competition that met a short list of essential requirements. First and foremost, input from a human manager had to be allowed. Some potentially hazardous applications will always have a human in the command loop to address reliability, safety, and legal concerns. Second, we wanted an application that required the collaborative efforts of heterogeneous teams of robots, with the resulting complexity in managing multi-agent interactions and coordination. Third, the operating environment needed to be both dynamically varying and configurable using static obstacles to model various urban or indoor settings.

Despite our interest in and experience with robot soccer [1], we realized it did not meet our criteria for multiple reasons: the rules of the most popular leagues require fully autonomous operation, the playing field does not contain arbitrary obstacles, and the robots on each team usually have nearly identical capabilities [2]. Existing search and rescue competitions come closer in that they support human operators and provide obstacle-rich environments, but their operational environments have little if any dynamic variation, and solutions do not require different kinds of robots on each team [3].

In the absence of a standard competition that met our criteria, we elected to define our own multi-robot game and to create the infrastructure required to play it using our lab facilities. We created a version of capture-the-flag in which two teams play against each other in an easily reconfigurable maze environment that can represent the streets of a city or the inside of a building. The goal of each team is to retrieve the opponent's flag and bring it back to base while preventing the opposing team from doing the same.
Each team has a human manager, mobile robots on the ground, and a (simulated) UAV flying over the playing area that collects vital position information for the team. The capabilities of the robots are such that a team can win only if its robots explicitly communicate and collaborate with each other. Moreover, robots can communicate directly with each other, with the UAV, or with the human operator only if both are within a specified distance of each other. Indirect communication through another robot is allowed provided that each link meets the maximum distance constraint (a sketch of this relay check appears below).

After our project was underway, we were intrigued to discover other research efforts based on capture-the-flag. Researchers at the California Institute of Technology and Cornell University have created a competition called RoboFlag that is loosely based on a combination of capture-the-flag and paintball [4], [5]. Teams have 6-10 robots and a human operator, and they attempt to bring their opponent's flag back to their Home Zone. Robots in enemy territory can be tagged by shooting them with small balls, a number of which are randomly distributed on the field at the beginning of each game. Although the rules and regulations of RoboFlag continue to evolve, all software required to run the game has been developed and is freely distributed [6]. Like our competition, RoboFlag is a very dynamic game and includes humans in the loop.
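The relay rule above is easy to check mechanically: a robot can exchange messages with the base station exactly when some chain of teammates links them with every hop in range. The sketch below illustrates the idea; the function name, the dictionary-based bookkeeping, and the range constant are our own illustrative choices, not testbed code.

```python
import math
from collections import deque

MAX_COMM_DIST = 1.5  # illustrative link range in meters; the real limit is a game parameter

def can_reach_base(base_pos, robot_positions, target_id):
    """Relay-rule check: the target robot can exchange messages with the
    base station iff a chain of teammates links them and every hop,
    including the first one from the base, is within MAX_COMM_DIST."""
    def linked(a, b):
        return math.dist(a, b) <= MAX_COMM_DIST

    # Start from every robot directly reachable from the base station.
    frontier = deque(rid for rid, pos in robot_positions.items()
                     if linked(base_pos, pos))
    reached = set(frontier)
    while frontier:
        rid = frontier.popleft()
        if rid == target_id:
            return True
        for other, pos in robot_positions.items():
            if other not in reached and linked(robot_positions[rid], pos):
                reached.add(other)
                frontier.append(other)
    return False
```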

The major differences between the two games are that RoboFlag does not take place among configurable obstacles, that RoboFlag teams consist of essentially identical robots, and that our version does not involve the shooting or tagging of opponents with balls.

This paper describes our approach to capture-the-flag in detail. Section II describes the rules of the game from the point of view of a competing team. Section III describes the system infrastructure, including hardware and software, that we have assembled in order to play the game. Section IV describes some of our experiences in playing the game as well as path-planning research conducted using the testbed. Finally, we offer conclusions and discuss future work in Section V.

II. GAME DESCRIPTION

Each team competing in a capture-the-flag game includes three types of agents: a human, several identical ground-based robots, and a UAV. During play, the human operator works from a computer located in a different room. The sole source of information to the operator is that provided by the team's robots. The UAV can clearly see the portion of the field it is flying over, but it cannot directly affect anything on the playing field. The robots on the ground can pick up the flag and tag opposing robots, but they lack the ability to detect the flat disk that represents a flag. The game reflects many real-world situations in which robots have a limited field of view and must pool their knowledge to construct a meaningful representation of the global state.

To successfully play the game, a team must coordinate the actions of the UAV with the robots in order to create a map of the maze, identify flag locations, track and defend against attacking robots, and exploit vulnerabilities in an opponent's defensive position. Equally important, an appropriate interface must be provided to the human manager that effectively conveys all known state information and that allows the manager to issue directives to the team. Since time constraints do not permit the direct low-level control of all robots by a single person, the commands issued by the manager must consist of higher-level directives that assume reliable, autonomous behavior of the robots at lower levels. A large part of the challenge for any competing team is to define a command set for the robots and to create the code making the agents smart enough to follow the directives reliably.

The playing field is divided into three sections, consisting of a home side for each team and a neutral zone separating them. Each team has one flag and several decoys. All look identical to the UAV, but the ground robots can distinguish them because they are only able to pick up the opponent's actual flag. At the start of each game, the referee places flags and decoys in each team's home territory. A game ends when one team succeeds in bringing the opponent's flag across the neutral zone onto its home side. A single competitive match between teams can consist of any desired number of games.

The field can be covered with obstacles organized as desired. In the current version of the game (as can be seen in Figure 1), each obstacle is a linear segment placed parallel to the X or Y axis on the playing field. Simple or complex mazes may be constructed, or desired physical layouts may be modeled. The obstacle configuration may be revised for each game, so a first step of any game is necessarily the creation of a map of the field as the robots move about and acquire information about their surroundings.
Fig. 1. The Capture-the-Flag Field.

The next subsections provide additional details about each of the three player types on each team.

1) The UAV: The single UAV on each team gives an overhead view of the section of the field it is currently over. The UAV can clearly see everything on the ground, including robots, obstacles, flags, and decoys, although the latter two appear identical from above. The team affiliation and identity of each robot is determined by the colors on its top. Each UAV has a fixed velocity and a maximum absolute angular velocity. In its flight, a UAV is not constrained by the walls or the ends of the field. The only way the UAV interacts with other parts of the game is in communicating the coordinates of what it has seen to its teammates. Given the supporting infrastructure, communicating with team members is as simple as making a function call. For practical reasons, the UAVs are simulated.

2) The Ground Robots: Each team has a number (usually at least three) of omni-directional ground robots. During play, the robots must traverse the maze while avoiding obstacles and obeying the rules of play. Each robot has sixteen (simulated) evenly spaced sonars around its perimeter that can be used to avoid obstacles or to detect enemy robots; a sketch of the sonar simulation appears below. A robot from team A can tag an opposing robot in team A's home territory by coming within a certain distance of it and issuing a tag command. Once tagged, a robot must drop any flag it holds and return to the far baseline of its home section before it can again participate in the game. Since they have no vision, ground robots must rely on the UAV to identify possible flag locations and to pass on those coordinates to them. (Each robot has the equivalent of GPS and knows its coordinates on the field.) If a robot is very close to the opponent's flag and positioned correctly, it can pick it up by issuing the appropriate command. Flags are currently simulated, but work is underway to implement them in hardware.
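Because every wall is axis-aligned, simulating a ring of sixteen sonars reduces to simple ray-segment intersections. The following sketch shows one way to compute the readings; the constants and data layout are illustrative assumptions, not the testbed's actual code.

```python
import math

NUM_SONARS = 16
MAX_RANGE = 1.0  # illustrative maximum sonar range in meters

def sonar_readings(x, y, heading, walls):
    """Ray-cast each of the 16 evenly spaced sonars against axis-aligned
    wall segments; walls are ((x1, y1), (x2, y2)) with x1 == x2 or y1 == y2.
    Returns one range per sonar, clipped to MAX_RANGE."""
    readings = []
    for i in range(NUM_SONARS):
        ang = heading + 2.0 * math.pi * i / NUM_SONARS
        dx, dy = math.cos(ang), math.sin(ang)
        best = MAX_RANGE
        for (x1, y1), (x2, y2) in walls:
            if x1 == x2 and abs(dx) > 1e-9:        # vertical wall
                t = (x1 - x) / dx                  # ray parameter at the wall's x
                hit_y = y + t * dy
                if 0.0 < t < best and min(y1, y2) <= hit_y <= max(y1, y2):
                    best = t
            elif y1 == y2 and abs(dy) > 1e-9:      # horizontal wall
                t = (y1 - y) / dy
                hit_x = x + t * dx
                if 0.0 < t < best and min(x1, x2) <= hit_x <= max(x1, x2):
                    best = t
        readings.append(best)
    return readings
```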

3) The Human: Conceptually, the human commander for each team is located within a base station centered on the baseline of its home territory. (Physically, the operator is actually in another room.) In general, ground robots and the UAV are able to send information from sensors and from their internal state to the base station, and the base station in turn sends back commands for the robots to execute. Communication with a robot on the ground is possible only if the robot is within the maximum communication distance, or if there are intermediate robots positioned so that each link is sufficiently short. There are no limitations on the actions the human can perform other than those imposed by the real-time operation and the communication constraints. One of the goals of our research is to identify different human interface techniques and evaluate how effectively they let the single commander direct the actions of the team.

III. SYSTEM INFRASTRUCTURE

Fig. 2. Capture-the-flag system architecture.

As we implemented the capture-the-flag system, we tried to make it as flexible as possible so that it could be used for a variety of robotics experiments, including studies unrelated to our game. We elected to make the system distributed so that it would be more scalable and so that it would more accurately model true multi-robot systems that require explicit communication and coordination between agents. From the outset, we thought it essential to create a simulation environment that would allow the development, testing, and debugging of agent code, and this has proven to be very valuable. Code that works on the current simulator typically works without any modification when run on the hardware robots.

The architecture of our infrastructure for capture-the-flag is shown in Figure 2. Each agent is represented by a separate block and implemented as a separate Linux process. All agents communicate through the "referee" block, which enforces the rules of the game. The referee also connects to the UAV modules, which handle the simulation of the UAVs on each team, and to the control server, which controls either hardware or simulated robots. The control server converts robot commands to commands that are sent to the appropriate robots, and it sends back important information to the referee, including the position, orientation, and simulated sonar values for each robot. The referee uses this position information to enforce the rules of the game, including tagging opposing robots and picking up flags.

A. Referee

The referee performs several important functions in the testbed. As the diagram shows, it acts as the system hub with a connection to all other modules. The referee has the responsibility of enforcing all game rules and communication constraints, so it monitors all robot commands and all inter-agent communication. This allows the referee, for example, to determine if a robot has been tagged and, if it has, to force it to drop its flag and move back to its home territory; a sketch of this handling appears below. Similarly, communication sent from one robot to another that is out of range can simply be deleted. The referee is also responsible for detecting when one team has won the game.
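To make the tag rule concrete, here is a hypothetical referee handler in the spirit of the description above. The object fields and helper names (in_home_territory, tag_radius, drop_flag) are our own illustrative assumptions, not the testbed's API.

```python
import math

def handle_tag(tagger, target, game):
    """Hypothetical referee handler for a tag command. A tag counts only
    on the tagger's home side and within the tag radius; a tagged robot
    drops any flag and is out of play until it returns to its baseline."""
    if not game.in_home_territory(tagger.team, target.position):
        return  # tags only count in the tagger's own territory
    if math.dist(tagger.position, target.position) > game.tag_radius:
        return  # too far away: the command has no effect
    if target.carried_flag is not None:
        game.drop_flag(target.carried_flag, target.position)
        target.carried_flag = None
    # Out of play until the robot crosses its own far baseline.
    target.active = False
```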
Since the referee is a connection hub for all modules in the testbed, it is a convenient module from which to manage the game and testbed. To make that possible, we have constructed a graphical user interface (GUI) for the referee that displays a variety of useful information about the system state. For example, the game administrator can zoom in or out of a display of the playing field that includes the location and status of all players, information extremely useful for debugging system components. A wide variety of infrastructure parameters can be modified through the GUI. For example, the administrator may start, stop, reset, or restart a game. Flags, robots, and even obstacles may be moved in simulation; map files depicting the layout of obstacles may be created, edited, loaded, and saved. Finally, the set of rules and communication constraints to be enforced may be modified, and the mode of operation can be switched from simulation to hardware or vice versa.

The functionality provided by the referee's GUI makes the testbed much easier to use. It eliminates the need for many command-line arguments at system startup, and for restarting the referee every time some aspect of the system has to be changed.

B. UAV Module

The UAV module handles all movement and vision generation for both UAVs. The UAV module receives the location of all the robots and flags from the referee, and it generates a list of obstacles from the map file. Using this data, it generates a list of what each UAV can see from its current location and, at each time step, sends the object list to the UAV along with that UAV's current location. The UAV code for each team changes the motion of the UAV by sending a desired offset from its current heading. The velocity of each UAV is considered to be constant. The UAV module also sends the location of each UAV back to the referee.

C. Simulator

The simulator provides a real-time software simulation of the hardware environment. The simulator allows robot routines to be written and tested in simulation and then used in hardware with little or no modification. At each time step, the simulator computes the new location of each robot and detects any collisions caused by that motion. If no collisions occurred, the simulator moves the robot. Otherwise, the robot is left in its old location and its velocity is set to zero, as sketched below. This method does not perfectly model the actual hardware, but it is reasonably close and it has the advantage of being fast, especially with large numbers of robots or obstacles.
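A minimal sketch of one simulator tick under this scheme; the field and method names (vx, collides, radius) are illustrative assumptions rather than the testbed's actual interfaces.

```python
DT = 1.0 / 30.0  # time step matching the 30 Hz control loop

def step_robot(robot, world):
    """One simulator tick: integrate the commanded velocity, keep the move
    only if it is collision-free, and otherwise leave the robot where it
    was with zero velocity."""
    new_x = robot.x + robot.vx * DT
    new_y = robot.y + robot.vy * DT
    if world.collides(new_x, new_y, robot.radius):
        robot.vx = robot.vy = 0.0   # blocked: stop rather than penetrate
    else:
        robot.x, robot.y = new_x, new_y
```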

D. Hardware

Shown in Figure 1, the capture-the-flag field is square and about 4.5 meters on a side. Obstacles are represented by short, black pieces of wood, square in cross section and fastened to the carpet flooring using Velcro. An overhead camera is used to determine the robot positions. As shown in Figure 3, the robots are three-wheeled omni-directional robots.

Fig. 3. A Capture-the-Flag Robot.

The robot position is calculated by a vision server which processes frames from the overhead camera. Any process wanting to get robot position information can communicate with the vision server to obtain that information. Each robot has a square top that is divided into four triangle-shaped sections that meet at the center of the robot. The left and right sections are black and the front and back sections are a light color. The front of the top has a black strip so that the orientation of the robot can be determined. At the center of the robot is a colored square that indicates the robot's team. Different light-colored sections are used by each robot on a team to uniquely identify that robot. The robot position is determined by finding points along the lines that separate the light and dark regions of the top and using a least-squares algorithm to find where those lines intersect; a sketch of this step appears below. The lines between dark and light regions are easy to detect accurately with little noise, and the least-squares method further filters noise. The unique team color provides a fast way of finding rough robot locations for each team.

Our robots were originally designed to meet the RoboCup small-size league specifications [2]; with the exception of the tops, the robots were not modified for use in this testbed. Our three-wheeled omni-directional design was based on Ohio University's Phase IIIB design as described in [7]; the omni-directional movement greatly increases the maneuverability of the robots.

For low-level control, each robot uses a "MagiccBoard", a custom development board that has a 29 MHz Rabbit microprocessor. An array of desired wheel speeds (all wheels, all robots) is transmitted from the computer running the control server to the robots using a 900 MHz wireless serial modem. Each robot on the team extracts its information from the array and regulates each of its three wheels to the desired velocity.
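The least-squares step can be made concrete. One standard formulation, sketched below under our own naming, fits each light/dark boundary as a line n . p = c and then solves the small stacked system for the point nearest all the lines:

```python
import numpy as np

def fit_line(points):
    """Total-least-squares fit: returns a unit normal n and offset c with
    n . p = c for points p on the line (well-behaved even for vertical lines)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)  # last right singular vector
    normal = vt[-1]                           # is perpendicular to the line
    return normal, float(normal @ centroid)

def intersect_lines(lines):
    """Least-squares intersection of two or more (normal, offset) lines:
    solve the stacked system N p = c in the least-squares sense."""
    N = np.array([n for n, _ in lines])
    c = np.array([off for _, off in lines])
    point, *_ = np.linalg.lstsq(N, c, rcond=None)
    return point  # estimated robot center
```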
E. Control Server

The control server is a separate process that provides a single interface to both hardware and simulated robots. It provides robot position and orientation from either the vision server or the simulator, as well as computing simulated sonar values. Separating the low-level control for all robots into a separate process helps ensure that commands are sent out thirty times per second, improving system stability. Due to the characteristics of the Linux scheduler, timing cannot be guaranteed with multiple processes running on a single machine. Thus, stable robot operation was not achieved until the control process was isolated on a dedicated machine.

Individual robot processes monitor their own position and can send out waypoints to the control server one at a time. An interruption in the execution of a robot process can result in the corresponding physical robot pausing at a waypoint until the next one is sent, a stable behavior. Having a separate control server allows a short time step for better control when a connected process is only able to send a desired robot path or destination at a much slower rate. Additionally, it allows a robot interface with higher-level commands, such as going to a point or following a path. Obstacle avoidance is also implemented in the control server. Anything that is not specific to the game of capture-the-flag can be implemented here so that it is available for other experiments.

F. Multi-agent Communication Framework (MCF)

To handle the communication between all the different processes in our architecture, we used the "Multi-agent Communication Framework" (MCF) developed in our lab. MCF allows the different components to connect without knowledge of where the other parts of the system are, greatly simplifying system communication. MCF also provides location-independent message passing between the system components. This framework reduces the overhead of testing, simulation, and prototyping by providing communication abstractions that hide implementation details, such as where an agent's code is running and how to communicate with it. Developing capture-the-flag infrastructure code with MCF simplified the communication code considerably, since the framework provides a communication abstraction that takes care of all the bothersome details.

MCF is implemented using a client-server model. Each computer in the lab has an MCF server process running; the process constitutes a single contact point for each local agent client. As a group, the servers form a dynamic communication backbone for the system, informing each other of important changes as well as routing MCF messages to their destinations.
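Since MCF's actual API is not documented here, the sketch below uses an in-process stand-in to illustrate the client contract the text describes: register under a type, look up peers by type, and send to an ID without knowing where the peer runs. Everything in it is illustrative.

```python
class McfServerStub:
    """In-process stand-in for a local MCF server. The real MCF routes
    messages between machines; this stub keeps everything in one process
    purely to show the pattern of type-based lookup and ID-based delivery."""
    def __init__(self):
        self.clients = {}   # id -> (client_type, inbox)
        self.next_id = 0

    def register(self, client_type):
        cid = self.next_id
        self.next_id += 1
        self.clients[cid] = (client_type, [])
        return cid

    def lookup(self, client_type):
        return [cid for cid, (t, _) in self.clients.items() if t == client_type]

    def send(self, dest_id, message):
        self.clients[dest_id][1].append(message)

    def receive(self, cid):
        _, inbox = self.clients[cid]
        return inbox.pop(0) if inbox else None

# Usage: an agent needs only its local server to reach any peer by type or ID.
server = McfServerStub()
uav = server.register("uav")
robot = server.register("ground_robot")
for peer in server.lookup("ground_robot"):
    server.send(peer, {"type": "flag_candidate", "pos": (2.0, 3.5)})
print(server.receive(robot))
```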

Clients connect to the server on the local machine. Through the server, clients can obtain lists of the IDs of all clients in the system that match a particular client type. A client can communicate with any other agent by sending a message and destination ID to the local server. MCF has been designed to allow a client to be implemented with as little system information as possible. A client need only know how to connect to the local server to take advantage of the full functionality of MCF. The consistent manner in which agents connect and interact with MCF allows the communication component to be separated from the rest of the agent code.

The modular design of the testbed made possible by MCF results in great flexibility and scalability. Although the game rules limit each team to a single human operator, modifications to the infrastructure to accommodate multiple operators would be trivial. Switching back and forth between simulation and hardware is painless, supporting fast prototyping and testing of robot behaviors. In addition to playing the full game, it is easy to run experiments exploring specific scenarios, such as a single team of robots mapping a complicated maze, or perhaps a group of robots trying to trap an opposing robot within a region of the maze. Experiments can easily combine hardware robots with simulated robots interacting in the same world-space, and the current MCF platform can easily support tens of robots in a single scenario if desired.

IV. SYSTEM INTERFACE AND USAGE

Our initial user interface was limited. It presented the user with a global view of the field, allowing the user to select a robot by clicking on it. Once a robot was selected, the user could click on a location in the map, which would issue a move-to-location command to the robot. In addition, the user could issue robot velocity commands as well as various game commands. The simple interface did not automate path planning or collision avoidance routines. Therefore, the user was required to specify waypoints that would guide the robot around obstacles.

The challenges of using this simple interface motivated us to develop automated path planning routines. Since all path planning is based on some graph representation of the world, the algorithm used to generate the underlying graph limits the quality of paths that can be constructed. During initial experimentation, we identified four requirements that a graph must meet to provide the functionality that we required. First and most importantly, a useful graph should account for obstacle dimensions in a way that creates safe "buffer zones" around all obstacles, but it must do this in a way that will not prevent access to any reachable areas of the world. Paths created from such a graph can therefore have any desired reachable point on the field as their destination, and they will always maintain a minimum distance from all obstacles. Second, the algorithm should scale well with respect both to map size and the number of obstacles present. Our lab resources limit us to running the actual hardware robots in a relatively small area, but in simulation our experiments involve much larger maps that produce much larger graphs. Third, the graph should not require an excessive amount of memory to store. In a distributed environment, each agent must have its own copy of the graph, so total memory requirements are an important consideration. Finally, a useful graph will lend itself to implementation using a straightforward data structure that other modules in the agent code besides the path-planner can use if desired.

The first graph generation method that we implemented was to divide the field into a grid array. Grid cells containing an obstacle were assigned infinite cost and grid elements close to walls received high cost to create a buffer around the obstacles. The resulting array can be searched using any standard search algorithm; one such search is sketched below. Through experimentation, we found that for our testbed, a grid size of 10 cm resulted in acceptable paths. While the gridding approach is straightforward to implement, it requires excessive memory: 800 KB for a 4.5 × 4.5 meter field.
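As a concrete instance of "any standard search algorithm," the sketch below runs Dijkstra's algorithm over a cost array with infinite-cost obstacle cells and high-cost buffer cells; the data layout is our own illustrative choice.

```python
import heapq

CELL = 0.10  # 10 cm grid resolution, as used in the testbed

def plan_on_grid(cost, start, goal):
    """Dijkstra search over a 2-D cost grid. cost[r][c] is the price of
    entering a cell: float('inf') for obstacle cells, a high value in the
    buffer zone near walls, 1.0 elsewhere. Returns a list of (row, col)
    cells from start to goal (just [start] if the goal is unreachable)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from the goal to recover the path.
    path, cell = [], goal
    while cell in prev:
        path.append(cell)
        cell = prev[cell]
    return [start] + path[::-1]
```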
The next graph generation method that we implemented was the Voronoi algorithm [8], [9], which has been used extensively at BYU [10], [11]. The obstacles were approximated by polygons, and the vertices of the polygons were used to generate the Voronoi graphs. Graph edges passing through the obstacles were pruned before searching. This technique has several problems: first, the paths are often very close to obstacles; second, the paths are not very smooth; and third, valid paths are sometimes pruned. The resulting paths are often far from optimal. Figure 4 shows example paths generated using the grid array and the Voronoi techniques.

Fig. 4. (a) A path generated using a grid array. (b) A path generated using the Voronoi algorithm.

As an alternative, we designed a new technique for path generation that was particularly designed for urban-like environments. The first step of this algorithm is to create a polygon region around each obstacle with a buffer zone large enough to keep the robot from hitting the obstacle. If any section of the polygon is inside the region surrounding another obstacle, the points inside the polygon are removed and a point is added at each intersection point that joins the two polygons. Each polygon line is then extended until it runs into another polygon. These new lines become the connections between the nodes in the graph. A point is then placed at each of the end points of these lines and, optionally, where any lines overlap. These points become the nodes in the graph. To reduce the graph size, close nodes can be joined. This method produced paths that were safer than both other methods without significantly increasing the path length. A simplified sketch of the first step appears below.
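A minimal sketch of that first step under our own naming: surround one axis-aligned wall segment with a rectangular buffer zone and return its corners as candidate graph nodes. The later steps described above (merging overlapping polygons, extending edges until they meet another polygon, and joining nearby nodes) would build on this.

```python
def buffer_corners(wall, margin):
    """Buffered polygon around an axis-aligned wall segment, simplified to
    a rectangle: wall is ((x1, y1), (x2, y2)) with x1 == x2 or y1 == y2,
    and margin is the clearance needed to keep the robot off the obstacle.
    Returns the four corners as candidate graph nodes."""
    (x1, y1), (x2, y2) = wall
    lo_x, hi_x = min(x1, x2) - margin, max(x1, x2) + margin
    lo_y, hi_y = min(y1, y2) - margin, max(y1, y2) + margin
    return [(lo_x, lo_y), (lo_x, hi_y), (hi_x, hi_y), (hi_x, lo_y)]

# Example: a 1 m wall along the x-axis with a 20 cm robot-clearance margin.
print(buffer_corners(((1.0, 2.0), (2.0, 2.0)), 0.20))
```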

This safe-path algorithm was developed and tested using our simulation framework. We have also tested the algorithm extensively using hardware robots. These hardware tests confirmed that the safe-path graph's ability to maintain a higher minimum distance from obstacles creates a significant advantage for robots. This ability to reduce collisions reduces the chances that a robot will become non-functional due to a collision and reduces the amount of time a user is inconvenienced by robots that have become stuck on obstacles. It also reduces wear and tear on the robots.

Fig. 5. Human-Robot Interactive Path Planning.

With this path planning technique in place, we redesigned the human interface as shown in Figure 5. With this interface, the user can interact with the path planner if desired. The user is able to specify a set of waypoints for the robot to follow. As the waypoints are specified, the system plans feasible paths between the waypoints and displays the information to the user. The user can then move the waypoints or insert additional waypoints on the path. As the user inserts waypoints, new paths are generated. This allows the user to give detailed direction if desired, while facilitating human neglect tolerance, that is, ensuring that the agent can go without human input for a time without having its performance fall below a specified threshold.

An additional feature that was added to the human interface was the ability to create patrol paths for the robot by setting the location of the end of the path at the same point as the beginning. When this is done, the robot will continue following the path until another command is given. The user can even modify the patrol path while the robot is in action; a sketch of how waypoint and patrol paths can be assembled appears below. Experiments on interface effectiveness similar to those found in [12], [13] are being conducted to support on-going studies on human neglect tolerance.
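A minimal sketch of how such an interface might stitch user waypoints into one path and detect a patrol loop; the plan_segment callback stands in for any planner, for example the grid search sketched earlier.

```python
def assemble_path(waypoints, plan_segment):
    """Plan a feasible path between each consecutive pair of user waypoints
    and concatenate the results. If the last waypoint equals the first, the
    path is treated as a patrol loop to be repeated until a new command."""
    path = []
    for a, b in zip(waypoints, waypoints[1:]):
        # (consecutive segments share an endpoint; a real implementation
        # would deduplicate the joints)
        path.extend(plan_segment(a, b))
    is_patrol = len(waypoints) > 1 and waypoints[0] == waypoints[-1]
    return path, is_patrol
```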

V. CONCLUSION

This paper has presented a capture-the-flag testbed that is well suited to supporting research in multi-agent coordination and human-robot interaction. Our version of capture-the-flag requires collaboration between humans and heterogeneous robots in an environment that includes unpredictable opponents and obstacles that can be configured to represent arbitrary urban or indoor settings. The testbed has been designed to allow algorithms developed in simulation to be implemented directly on robotic hardware with few, if any, software changes required. The infrastructure is stable and includes an innovative multi-agent communication framework that simplifies software development.

Future research efforts using the testbed will focus on hierarchical team management structures, adjustable autonomy, coordinated path planning, formation control, heterogeneous multi-agent coordination, and interfaces for conducting human factors experiments.

ACKNOWLEDGEMENTS

The authors gratefully acknowledge the many valuable discussions on human-robot interactions with Mike Goodrich, Dan Olsen, and Tim McLain, the significant testbed development efforts of Jeff Anderson, and the interface development work by Joshua Johanson and Curtis Nielson.

REFERENCES
