A Taxonomy of Robot Deception and its Benefits in HRI


Jaeeun Shim
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, USA
jaeeun.shim@gatech.edu

Ronald C. Arkin
School of Interactive Computing
Georgia Institute of Technology
Atlanta, USA
arkin@cc.gatech.edu

Abstract—Deception is a common and essential behavior in humans. Since human beings gain many advantages from deceptive capabilities, we can assume that robotic deception can also provide benefits in several ways. In particular, the use of robotic deception in human-robot interaction contexts is becoming an important and interesting research question. Despite its importance, very little research on robot deception has been conducted, and no basic metrics or definitions of robot deception have been proposed yet. In this paper, we review previous work on deception in various fields, including psychology, biology, and robotics, and propose a novel way to define a taxonomy of robot deception. In addition, we introduce a research question concerning robot deception in HRI contexts and discuss potential approaches.

Index Terms—Robot Deception; Robot Behavior; Human-Robot Interaction; Robot Ethics

I. INTRODUCTION

Deception is a common behavior not only in humans but also in animals. Various biological research findings demonstrate that animals act deceptively in several ways to enhance their chances of survival. In human interaction, deception is ubiquitous: it occurs frequently during people's development and is present in personal relationships, sports, culture, and even war. Thus, it is fair to assume that, like animals and humans, robots can also benefit from the appropriate use of deceptive behavior.

Since the use of deceptive capabilities in robotics offers many potential benefits, it is becoming an important and interesting research question. Despite its importance, however, very little research on robot deception has been conducted until recently. Much of the current research on robot deception [1], [2], [3] focuses on applications rather than on fundamental theory, such as the definition of a taxonomy for robot deception. We review previous research on robot deception in more detail in Section II.

We contend that defining robot deception and establishing a taxonomy for it are important as a foundation for further robotics research on the subject, and herein we present such a taxonomy. To accomplish this, in Section III we carefully review different ways of defining deception from the fields of psychology, biology, and the military, and survey previous research on robot deception.

Since the use of social robots in everyday life is increasing, we particularly concentrate on robot deception in Human-Robot Interaction (HRI) contexts. We hypothesize that a robot, in order to be intelligent and interactive, should have deceptive capabilities that may benefit not only the robot itself but also its deceived human partner, in some cases to the detriment of the deceiver (the robot). In Section IV, we focus on robot deception specifically in HRI contexts and present a research question and potential approaches to its answer.

In sum, this paper has two main goals: 1) to introduce a taxonomy of robot deception, and 2) to pose a research question about robot deception in HRI. First, we present a novel way to define robot deception and then characterize a taxonomy of robot deception.
Second, we focus on robot deception in HRI contexts, which leads to the following research question:

Can a robot's deceptive behaviors benefit a deceived human partner in HRI contexts?

In Section IV, some initial ideas for answering this research question are also presented. Even though robot deception can provide several advantages to humans, it is arguable whether it is morally acceptable to deceive humans at all in HRI contexts. We consider this ethical issue and introduce some perspectives for approaching robot deception problems in Section V.

II. PREVIOUS RESEARCH

Endowing robots with the capacity for deception has significant potential utility [4], similar to its use in humans and animals. Clearly, deceptive behaviors are useful in the military domain [5], [6]. Sun Tzu stated in The Art of War, "All warfare is based on deception." Military robots capable of deception could mislead opponents in a variety of ways. As both individual robots and teams of robots become more prevalent in the military's future [7], [8], robotic deception can provide new advantages apart from the more traditional one of force multiplication.

In other areas, such as search and rescue or healthcare, deceptive robots might also add value, for example, by calming victims or patients when required for their own protection. Conceivably, even in the field of educational robots, the deceptive behavior of a robot teacher may play a role in improving human learning efficiency.

Despite the ubiquity of deception in nature and its potential benefits, very few studies of it have been conducted in robotics to date. One interesting application of robot deception is the camouflage robot developed at Harvard University [1]. Camouflage is a widely used deception mechanism in animals and militaries. Inspired by these real-world usages, the researchers at Harvard developed a soft robot that can automatically change the color of its body to match its environment.

Motion camouflage has also been studied for robot systems. Unlike the previous type of camouflage, motion camouflage is a behavioral deception capability observed in dragonflies. By following indirect trajectories, dragonflies can deceptively approach prey as if they were remaining stationary from the prey's perspective. Carey et al. [9] developed an optimal control mechanism to generate these motion camouflage trajectories and verified it in simulation. For real robot systems, more recent research [10] proposed new motion camouflage techniques that are applicable to unicycle robots.

Floreano's research group [2] demonstrated robots evolving deceptive strategies in an evolutionary manner, learning to protect energy sources. Their work illustrates the ties between biology, evolution, and signal communication, and does so on a robotic platform. They showed that cooperative communication evolves when robot colonies consist of genetically similar individuals. In contrast, when the robot colonies were dissimilar, some of the robots evolved deceptive communication signals.

Wagner and Arkin [4] used interdependence theory and game theory to develop algorithms that allow a robot to determine both when and how it should deceive others. More recent work at Georgia Tech is exploring the role of deception according to Grafen's dishonesty model [11] in the context of bird mobbing behavior [12]. Another study developed a robot's deceptive behavior inspired by biology, applying the squirrel's food-protection behavior to robotic systems and showing how a robot can successfully use this deception algorithm for resource protection [3].

Much research on robot deception has also been proposed in HRI contexts. Terada and Ito [13] demonstrated that a robot is able to deceive a human by producing a deceptive behavior contrary to the human subject's prediction. Their results illustrated that an unexpected change in the robot's behavior gave rise to an impression in the human of being deceived by the robot.

Other research shows that robot deceptive behavior can increase users' engagement in robotic game domains. Work at Yale University [14] illustrated increased engagement with a cheating robot in the context of a rock-paper-scissors game; human players attributed greater mental state to the robot when playing against the cheating robot. A study at Carnegie Mellon University [15] showed an increase in users' engagement and enjoyment in a multi-player robotic game in the presence of a deceptive robot referee. By declaring false information to game players about how much they won or lost, the researchers observed whether this behavior affected a human's general motivation and interest, based on frequency of winning, duration of play, and so on. These results indicate that deceptive behaviors are potentially beneficial not only in the military domain but also in a human's everyday context.

Brewer et al. [16] showed that deception can be used in a robotic physical therapy system. By giving deceptive visual feedback on the amount of force patients currently exert, patients perceive the exerted force as lower than it actually is. As a result, patients apply additional force and benefit during rehabilitation.

Recent work from the University of Tsukuba [17] shows that a deceptive robot partner can improve the learning efficiency of children. In this study, children participated in a learning game with a robot partner that pretends to learn from the children. In other words, the robot partner in this study is a care-receiving robot, which enables children to learn by teaching [18]. The goal of the learning game is for kids to draw the shapes corresponding to English words such as "circle" and "square". The interesting part is that the robot acted as an instructor but deliberately made mistakes and behaved as if it did not know the answer. According to the results, these unknowing/unsure behaviors significantly increased the children's learning efficiency. Since a robot's unsure/dumb behaviors can affect a human's learning efficiency, we assume that these results relate to a robot's deceptive capabilities. As a result, we can conclude that this study provides preliminary evidence of the positive effects of a robot's deceptive behavior in HRI contexts.

III. TAXONOMY OF ROBOT DECEPTION

A. Taxonomies of Deception from a Human Perspective

In other disciplines, researchers have developed definitions and taxonomies of deception drawing from the fields of psychology, biology, the military, engineering, etc. In this section, several ways to define and categorize deception in different fields are reviewed, followed by a suggested taxonomy of deception from a robotic perspective.

Deception has been studied extensively by observing different human cases, and several ways to define and categorize it have already been proposed by psychologists and philosophers. Chisholm and Freehan [19] categorized deception from a logical and philosophical viewpoint. Three dimensions were described for distinguishing among types of deception: commission-omission (the attitude of the deceiver; the deceiver "contributes causally toward" the mark's changes or merely "allows" the mark's changes with respect to belief states), positive-negative (the belief state of the mark; the deceiver makes the mark believe that a false proposition is true vs. that a true proposition is false), and intended-unintended (whether the deceiver changes the mark's belief state or merely sustains it). From the combination of these three dimensions, they provided eight categories of human deception, as shown in Table I(a).

From the results of diary studies and surveys, DePaulo [20] divides deception along four different dimensions: content, type, referent, and reasons (Table I(b)). Subcategories of each kind of deception are also observed and defined. The subcategories of content are feelings, achievements, actions, explanations, and facts. The reasons category contains the subcategories of self-oriented and other-oriented deception. In the type category, outright, exaggerations, and subtle were defined as subcategories. Four different referents were also suggested: liar, target, other person, and object/event.

The military is one of the biggest contexts for the use of deceptive behavior. Dunnigan and Nofi [21] proposed a taxonomy of deception based on the ways deceptive behaviors are generated, as shown in Table I(d).

Whaley [22], [23] suggested six categories of deception and grouped them into two sets. The six categories are masking, repackaging, dazzling, mimicking, inventing, and decoying. These are grouped into dissimulation and simulation (Table I(d)). The first three (masking, repackaging, and dazzling) are categorized as dissimulation (the concealment of truth), and the others fall into the simulation category (the exhibition of the false).

Recently, Erat and Gneezy [24] classified four types of deception based on their consequences: selfish black lies, spite black lies, Pareto white lies, and altruistic white lies (Table I(c)).

In cyberspace, deception happens frequently, and a taxonomy of deception for this domain has been proposed by Rowe et al. [25]. They defined seven categories of cyberspace deception based on linguistic case theory: space, time, participant, causality, quality, essence, and speech-act. By exploring subcategories of each case, they proposed 32 types in their taxonomy of cyberspace deception (Table I(e)).

Many deceptive behaviors are also observed in nonhuman cases. Animal deception can be categorized depending on its cognitive complexity [26], specifically into the two categories of unintentional and intentional animal deception (Table I(f)). Unintentional animal deception includes mimicry and camouflage. In contrast, intentional deception requires more sophisticated behavioral capacities, such as broken-wing displays or the many non-human primate examples such as chimpanzee communication [27].

Recently, researchers in Human-Computer Interaction (HCI) defined the notion of benevolent deception, which aims to benefit not only the developers but also the users [28]. They did not propose a taxonomy of deception, but provided new design principles regarding deception in HCI.

TABLE I: Taxonomies of Deception in Various Fields (summary)
(a) Philosophy — Chisholm and Freehan [19]: logical and philosophical viewpoint with a proposition; eight categories from the commission-omission, positive-negative, and intended-unintended dimensions.
(b) Psychology — DePaulo [20]: analysis of diary studies and surveys; content, type, referent, and reasons.
(c) Economics — Erat and Gneezy [24]: deceiver's and mark's consequences; selfish black lies, spite black lies, Pareto white lies, altruistic white lies (chart reproduced with permission from [24], Fig. 1, "Taxonomy of Lies on Change in Payoff").
(d) Military — Dunnigan and Nofi [21]; Whaley [22], [23]: dissimulation (masking: hiding in background; repackaging: hiding as something else; dazzling: hiding by confusion) and simulation (mimicking: deceiving by imitation; inventing: displaying a different reality; decoying: diverting attention).
(e) Cyberspace — Rowe et al. [25]: linguistic case theory; space (direction, location-at/-from/-to/-through, orientation), time (frequency, time-at/-from/-to/-through), participant (agent, beneficiary, experiencer, instrument, object, recipient), causality (cause, contradiction, effect, purpose), quality (accompaniment, content, manner, material, measure, order, value), essence (supertype, whole), and speech-act (external and internal preconditions).
(f) Biology [26]: intentional vs. unintentional deception.

B. A Taxonomy of Robot Deception

Based on previous efforts in this area, a taxonomy of robot deception was developed. Similar to human and animal deception, robot deception occurs during interactions among robots or between humans and robots. Therefore, analyzing these interactions can identify the key factors for categorizing robot deception.

Similar to Chisholm and Freehan's approach [19], we specify the salient dimensions of robot deception first and then define a taxonomy of robot deception based on these characteristics.

The three dimensions of robotic deception are the deception object, the deception goal, and the deception method (Table II). First, the interaction object indicates with whom the robot interacts and thus whom it tries to deceive. Along this dimension, deception can be classified into the two categories of robot-human deception and robot-nonhuman deception.

The second dimension is the deception goal. This approach is similar to the distinctions in DePaulo's taxonomy, especially the reasons category [20], in that it categorizes robot deception based on the reason why a robot tries to deceive others: self-oriented deception and other-oriented deception. Self-oriented deception means that the robot's deceptive behaviors benefit the robot itself. In contrast, other-oriented deception occurs when the goal of robot deception is to provide an advantage to the deceived robots or human partners, even at the robot's own expense.

The final dimension is the deception method, which is the way in which the robot generates deception. This dimension is similar to the taxonomy of animal deception (intentional vs. unintentional) and includes embodiment/physical deception and mental/behavioral deception. Embodiment deception indicates deception resulting from the robot's morphology, such as camouflage. In mental/behavioral deception, a robot deliberately generates intentional deceptive behaviors.

TABLE II: Three Dimensions for Robot Deception Taxonomy
Interaction object — Robot-human deception (H): the robot deceives human partners. Robot-nonhuman deception (N): the robot deceives nonhuman objects such as other robots, animals, and so on.
Deception goal — Self-oriented deception (S): deception for the robot's own benefit. Other-oriented deception (O): deception for the deceived other's benefit.
Deception method — Physical/unintentional deception (P): deception through the robot's embodiment; low cognitive/behavioral complexity. Behavioral/intentional deception (B): deception through the robot's mental representations and behaviors; higher cognitive complexity.

From the combinations of these three dimensions, we can define a taxonomy of robot deception as shown in Table III. Each element of the taxonomy (type) consists of a combination of three categories, one from each dimension, providing eight different types of robot deception. The table also includes examples of each type. As shown in the table, the N-S-P and N-O-P types do not yet have specific examples in robot contexts, so we exclude these two types from further consideration. Thus, based on the characteristics of interactions in current robot systems, six different types of robot deception are defined to constitute our taxonomy of robot deception.

TABLE III: Robot Deception Taxonomy
H-S-P — Deceiving a human for the deceiver robot's own benefit using physical interactions. Example: camouflage robots, e.g., DARPA's soft robot [1].
N-S-P — Deceiving another robot or nonhuman for the deceiver robot's own benefit using physical interactions. Example: N/A.
H-O-P — Deceiving a human for the deceived human's benefit using physical interactions. Example: android robots.
N-O-P — Deceiving another robot or nonhuman for the deceived other's benefit using physical interactions. Example: N/A.
H-S-B — Deceiving humans for the deceiver robot's own benefit using behavioral interactions. Example: robot deception in HRI [13].
N-S-B — Deceiving other robots or nonhumans for the deceiver robot's own benefit using behavioral interactions. Examples: mobbing robot [12], robot deception using interdependence theory [4], squirrel-like resource protection robot [3].
H-O-B — Deceiving humans for the deceived human's benefit using behavioral interactions. Examples: robot deception in entertainment [14], deceptive robot learner for children [17], robot referees for human game players [15].
N-O-B — Deceiving other robots or nonhumans for the deceived other's benefit using behavioral interactions. Example: robot sheepdog [29].
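To make the taxonomy concrete, the three dimensions and the resulting type codes of Table III can be expressed as a small data structure. The following is a minimal illustrative sketch in Python; the class and function names are our own, not part of the paper, and the sketch only composes a type code such as N-S-B from one category per dimension.

```python
from enum import Enum
from dataclasses import dataclass

class InteractionObject(Enum):
    HUMAN = "H"      # robot deceives human partners
    NONHUMAN = "N"   # robot deceives other robots, animals, etc.

class DeceptionGoal(Enum):
    SELF_ORIENTED = "S"    # deception for the robot's own benefit
    OTHER_ORIENTED = "O"   # deception for the deceived other's benefit

class DeceptionMethod(Enum):
    PHYSICAL = "P"     # embodiment-based, low cognitive complexity (e.g., camouflage)
    BEHAVIORAL = "B"   # deliberate, intentional deceptive behavior

@dataclass(frozen=True)
class RobotDeceptionType:
    obj: InteractionObject
    goal: DeceptionGoal
    method: DeceptionMethod

    @property
    def code(self) -> str:
        """Type code as used in Table III, e.g. 'N-S-B'."""
        return f"{self.obj.value}-{self.goal.value}-{self.method.value}"

# Example: the squirrel-inspired resource-protection robot [3] deceives a
# competitor robot (N), for its own benefit (S), through its behavior (B).
squirrel_robot = RobotDeceptionType(
    InteractionObject.NONHUMAN, DeceptionGoal.SELF_ORIENTED, DeceptionMethod.BEHAVIORAL
)
assert squirrel_robot.code == "N-S-B"
```

Enumerating the eight combinations of these categories reproduces the rows of Table III; the two rows without robotic examples (N-S-P and N-O-P) are the ones excluded from further consideration above.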
C. Robot Deception: A Case Study

A taxonomy for robot deception was presented above by defining its salient dimensions and categories. In our previous research [3], we developed and evaluated a robot's deceptive behaviors for resource-protection strategies, which are potentially applicable in military contexts and are inspired by biology. The patrolling strategy used by Eastern Grey Squirrels is one interesting example in nature of the possible role of deception [30]: squirrels use deception to protect their food caches from other predators.

Briefly, the squirrel spends time visiting its stocked food caches. It was observed, however, that when a predator is present, the squirrel changes its patrolling behavior to spend time visiting empty cache sites, with the apparent intent to mislead the raider into believing that those sites are where the valuables are located, a diversionary tactic of sorts.

Inspired by these deceptive behaviors of squirrels, we developed and implemented deception algorithms for a robot. Figure 1a illustrates the high-level model of the algorithms using a finite state acceptor (FSA). After caching is complete, the robot begins to move between the caching locations in order to patrol its resources. The behaviors of the robot include goal-oriented movement, selecting places, and waiting, as shown in Figure 1b. Initially, the robot employs the true patrolling strategy when the select-true-location trigger is activated. This trigger calculates which of the many caching locations the robot should patrol in the current step. The calculation is made by random selection based on the transition probabilities among the places. Transition probabilities are determined by the number of cached items; in other words, if a place holds more items, the probability of visiting that place is higher. When a competitor robot is detected, the squirrel robot starts the false patrolling strategy and selects and goes to false patrolling locations, as shown in Figure 1b. This is a form of misdirection, where communication is done implicitly through a behavioral change by the deceiver (a simplified sketch of this patrolling logic is given at the end of this subsection).

Fig. 1: FSA for robot deception algorithms: (a) high-level FSA: caching and patrolling behaviors of the squirrel robot; (b) sub-FSA: food patrolling (top: normal, bottom: deception).

We implemented this strategy in simulation [3] and showed that these deceptive behaviors worked effectively, enabling robots to perform better with deception than without with respect to delaying the discovery of the cache. We also evaluated the algorithm by applying it to a real robot system using the experimental layout in Figure 2. Table IV illustrates the results of the experiments: the results with deception converge to the predator's maximum successful pilferage more slowly than without deception. Therefore, it can be concluded that the deception algorithm leads to better resource-protection performance for the robot.

Fig. 2: Robot experiment layout with two Pioneer robots.

TABLE IV: Robot Experiment Results (Convergence Rate)
With deception: -2.2188
Without deception: -1.1035

This deception capability can be categorized in terms of our taxonomy. First, the object the robot tries to deceive is another, competitor robot, i.e., a nonhuman object (N). The deception happens through the robot's behaviors, by intentionally misleading the competitor robot (B). In the deception goal dimension, the benefit of this deception capability is protecting the deceiver's resources longer, so the squirrel robot obtains the advantage, i.e., self-oriented deception (S). As a result, this squirrel-robot deception is classified as the N-S-B type in our taxonomy.

Other applications of robot deception can be similarly categorized in the taxonomy. Examples of how to categorize various forms of robot deception are shown in Table III.
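Returning to the patrolling strategy referenced above, the following is a minimal Python sketch of its selection logic. This is not the implementation from [3]: the function names, the dictionary data layout, and the single Boolean competitor flag are simplifying assumptions of ours. The sketch only illustrates the two triggers described in the text, true patrolling weighted by cache contents and false patrolling over empty sites once a competitor is detected.

```python
import random

def select_true_location(cache_contents: dict) -> str:
    """True patrolling: pick a stocked cache site with probability proportional
    to the number of items cached there (more items -> visited more often)."""
    sites = [s for s, n in cache_contents.items() if n > 0]
    weights = [cache_contents[s] for s in sites]
    return random.choices(sites, weights=weights, k=1)[0]

def select_false_location(cache_contents: dict) -> str:
    """False (deceptive) patrolling: pick among empty cache sites to mislead
    an observing competitor about where the resources actually are."""
    empty_sites = [s for s, n in cache_contents.items() if n == 0]
    return random.choice(empty_sites)

def patrol_step(cache_contents: dict, competitor_detected: bool) -> str:
    """One step of the patrolling behavior sketched in Fig. 1b: normal patrolling
    by default, switching to deceptive patrolling when a competitor is seen."""
    if competitor_detected:
        return select_false_location(cache_contents)
    return select_true_location(cache_contents)

# Example: three stocked caches and two empty (decoy) sites.
caches = {"A": 5, "B": 2, "C": 1, "D": 0, "E": 0}
print(patrol_step(caches, competitor_detected=False))  # mostly "A", sometimes "B" or "C"
print(patrol_step(caches, competitor_detected=True))   # "D" or "E"
```

In this form, the deception is purely a change in which sites are visited; no explicit signal is sent to the competitor, matching the implicit, behavioral misdirection described above.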
IV. OTHER-ORIENTED ROBOT DECEPTION IN HRI

Many researchers aim to build social robots that feature intentionality. According to Dennett [31], higher-order intentionality can be achieved by adding several different capabilities, notably a deception capability. In other words, more intentional and autonomous social robots become possible when deceptive capabilities are added. Therefore, research on robot deception is arguably an important topic in HRI studies.

HRI studies generally aim to evaluate how a robot affects human partners, and the goal of such studies is usually to achieve effective and positive interactions between robots and humans. Similarly, it is necessary to consider the potential benefits to human partners when dealing with robot deception in HRI contexts. In other words, one goal of a robot's deceptive behaviors in HRI should be to provide an advantage to the deceived human.

Earlier we saw several studies in the field of psychology that defined deception in different ways. In particular, DePaulo [20] defined and characterized a taxonomy of human deception based on motivations, including self-oriented and other-oriented deception. We also capture this distinction in the taxonomy of robot deception presented in the previous section, where one dimension is defined based on the deception's goal.

Other-oriented robot deception should be developed and evaluated when applying deception capabilities to robots that interact with human partners in different HRI contexts for their benefit. As a result, we ask:

Can a robot's deceptive behaviors benefit a deceived human partner in HRI contexts?

The notion of other-oriented deception has also been proposed in HCI as benevolent deception [28]. However, the embodied nature of a robotic agent distinguishes it from traditional HCI with respect to its effect on a human actor. Therefore, principles of other-oriented robot deception in HRI need to be developed separately.

To answer this research question, the following research steps are being undertaken. First, a computational model and associated algorithms need to be developed specifically for other-oriented robot deception. The computational model must then be implemented and embedded on a robotic platform. Finally, the effects of this type of deception should be evaluated using carefully constructed HRI studies to observe whether the deceived human partners truly obtain a benefit from a robot's other-oriented deception capabilities.

We also characterize the situational conditions pertaining to the application and utility of other-oriented deception by grouping and characterizing relevant situations in which it occurs. From reviewing various situations in which other-oriented deception occurs between humans, such situations can be grouped along two dimensions: 1) the time duration of the deception, and 2) the payoff to the mark (the deceived person). The time dimension ranges from one-shot to short-term to long-term, referring to the length of time the deception is maintained by the deceiver's actions. The mark's payoff is categorized by the effect on the mark's outcome, ranging from high to low payoff.

Fig. 3: Situational conditions for other-oriented deception, with examples.

As shown in Figure 3, representative other-oriented examples are illustrated by their location in this two-dimensional space. They include:

1. Crisis management is a situation where the deceiver's deceptive behaviors or lies must have a rapid effect on the mark (short-term), perhaps in a life-threatening, high-payoff situation. For example, other-oriented deception in a search-and-rescue situation may involve immediate emotional or physiological remediation for a victim. Lying to a mark about the direness of their situation in order to calm them down or increase their confidence may increase their likelihood of survival in this life-critical situation.

2. When someone faces a highly stressful situation, such as a big presentation in front of a huge crowd or an athletic trial, people sometimes lie in the short term to cheer them up, increase their confidence, or calm them down, for example, "Don't worry! You're perfectly prepared" or "I know you can do this."

3. Quality of Life Management (QoLM) involves maintaining deception over long periods of time, again in potentially life-critical (health) situations such as the therapeutic treatment of serious or degenerative illness, or regarding the status of long-term economic well-being. For example, placebos may be persistently used for a deceived patient's long-term benefit [32]. Long-term lying can be used in a similar manner in the hope of benefiting the patient.

4. Teachers also sometimes behave deceptively or lie for educational purposes, perhaps by playing dumb, for example [17]. This deception can increase the student's learning efficiency and produces long-term benefits for the mark, although the deceit itself may be either short- or long-term.

5. One-shot casual lies are common in general conversation [20]. Generally, deceivers act deceptively or tell a lie to maintain the mark's positive emotional state. General lies such as "you look nice today" or "I like your clothes" are obvious examples of one-shot casual lies. These are not life-critical situations. "That was a great presentation" is another such example.

6. Flattery also ranges from the short to the long term, aiming to put the mark in an emotional state beneficial to their performance. Persistent flattery (e.g., "a**-kissing") makes the mark feel undeservedly better about themselves over a relatively long period. In this long-term case, benefit (payoff) accrues to both the mark and the deceiver, but we focus for now solely on the benefit to the mark.

7. People sometimes feign weakness to make the mark feel better by letting them help the deceiver over a short period. The mark can maintain a good emotional state or feel better and more confident as a result of this deception. For example, a woman might pretend not to be able to open a jar just to make a man feel better and more confident about himself.

8. One-shot jokes, or more persistent kidding using deception, are another example of short-term lies, since they aim to maintain a good atmosphere in a social community by putting the mark at ease, perhaps by stating falsehoods about themselves, others, or a situation in a humorous and non-truthful way.

9. Promotion of suspension of disbelief uses deception to provide the mark with fun and enjoyment. For example, movies, magic, and other fictional works use illusion to deceive the mark. This differs from other examples of deception, since the mark voluntarily allows herself to be deceived.

10. A masquerade is characterized by deception that persists for extended periods of time to create an illusion regarding something that does not exist, but it may make the mark feel better about themselves.

11. Sometimes, people hide distressing information or negative situations from others, assuming they may be able to resolve the matter on their own without additional help from the mark, and so avoid inducing anxiety in the deceived.

HRI s

