Design Challenges and Principles for Wizard of Oz Testing of Location-Enhanced Applications

Using Wizard of Oz techniques, designers can efficiently test location-enhanced application prototypes during the early stages of design, without building a full-fledged application or deploying a sensing infrastructure.

Yang Li, University of Washington
Jason I. Hong, Carnegie Mellon University
James A. Landay, University of Washington and Intel Research Seattle

Location-enhanced applications adapt their behavior to or provide information about the location of people, places, and things. Boost Mobile's Loopt service (www.boostmobile.com/boostloopt), for example, lets mobile phone users identify their friends' current locations, while Navitime's EZNaviWalk (http://brew.qualcomm.com/brew bnry/pdf/press kit/navitime.pdf) helps users find optimal routes to their destinations based on the mode of transportation they select. Such location-enhanced applications are the most widely adopted type of ubicomp application; analysts predict that their use will grow tremendously in the near future.1

The problem, however, is that designing location-enhanced applications is often difficult.2 Researchers have built toolkits for developing these applications,3–5 but using the toolkits requires significant technical expertise. This makes it hard for interaction designers who lack a programming or hardware background to prototype, evaluate, and iterate on designs. In addition, location infrastructures aren't always available (GPS, for example, doesn't work indoors), and location-sensing technologies are still nonstandard and complex. In short, it takes significant effort to deploy a location-enhanced application so that you can realistically test it with users. And, by that time, it's often too late and too expensive to make major changes.

Here, we discuss Wizard of Oz techniques6 for testing location-enhanced applications. WOz techniques are often employed in user interface (UI) prototyping and have been successfully used in several tools for early-stage design.7 The WOz approach lets users try out a system before it's fully developed. It accomplishes this using a "wizard" to simulate system parts that require sophisticated technologies, such as speech recognition7 or location sensing.8,9 This makes it easy to quickly explore many ideas because designers are less constrained by technical details.10 Previous research has performed field experiments with location-based systems.8–11 Here, we focus on tool support for WOz testing of location-enhanced applications, which pose two new challenges:

• They must incorporate location contexts, which have a much larger design space than traditional GUI input.
• The target setting is often a dynamically changing field environment, rather than a desk.

To address these issues, we built Topiary, which includes a suite of WOz techniques for testing location-enhanced applications. We previously described how designers can use Topiary to prototype and test location-enhanced applications.9 Here, we use it as a proof of concept of WOz techniques and highlight three important research issues related to designing WOz testing techniques for ubicomp applications: designing visual languages for WOz testing, allocating tasks between wizards and designers, and automating wizard tasks.

Figure 1. A typical location-based WOz test setting. (a) An end user interacts with the end-user UI on a device, such as a PDA. (b) A wizard follows the user and updates his or her location in the wizard UI on a tablet PC.

Topiary overview

Topiary provides a set of high-level prototyping abstractions (maps, scenarios, and storyboards) that designers can use in the tool's active map, storyboard, and test workspaces.

Designers use the active-map workspace to create a model of the location of people, places, and things. They can then place graphical objects representing these entities on a map to signify their locations and spatial relationships to one another; for example, "Alice is in the café" and "Bob is far from Tom."

In the storyboard workspace, designers can capture location contexts as scenarios. To create interface mockups, designers sketch pages that represent screens and create links that represent transitions between pages. They can then use captured scenarios as link conditions or triggers. For example, the designer might specify that an application prototype automatically switches from one page to another when "anyone moves near Bob." Likewise, when users click on a button, the prototype could show different information depending on the user's location.

Designers can use Topiary to test a design with real users by running the interface mockup on a mobile device, such as a PDA (see figure 1a). During a test, users interact with the interface mockup, while a wizard follows them and updates the location of people and things on a separate device (see figure 1b). The test workspace has two components: the end-user UI that users see and interact with (see figure 1a), and the wizard UI, where designers simulate location contexts (see figure 2).

Figure 2. Topiary's wizard UI. The wizard map represents entities' current location and orientation; to simulate updates, a wizard drags them on the map. The end-user screen lets a wizard monitor the user's action on the end-user UI. The storyboard analysis window lets the wizard debug the interaction flow by highlighting the current page and the most recent transition. The radar view provides an overview of the wizard map. In this example, a wizard drags Bob's arrow to reflect his orientation; the map in the end-user UI (and on the end-user screen) automatically rotates so that its orientation is always consistent with the direction Bob is facing.

A wizard simulates location contexts by moving people and things around to dynamically update their locations. If moving a person or a thing activates a storyboard transition, the end-user UI automatically transitions to a target page. To simulate the entity's orientation change, the wizard rotates the arrow attached to the entity. A wizard navigates in the wizard map by either panning the map directly or circling a target region in the radar view.
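
The article doesn't describe Topiary's internals at the code level, but a minimal sketch of how a scenario trigger such as "anyone moves near Bob" could drive a storyboard transition might look like the following. The entity model, the 10-meter proximity threshold, and the transition table are illustrative assumptions, not Topiary's actual API.

```python
import math
from dataclasses import dataclass

# Illustrative sketch only: the entity model, the proximity threshold, and the
# transition table are assumptions, not Topiary's actual data model or API.

@dataclass
class Entity:
    name: str
    x: float  # map coordinates, in meters
    y: float

def distance(a: Entity, b: Entity) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

def anyone_moves_near(entities, target: Entity, radius_m: float = 10.0) -> bool:
    """Scenario "anyone moves near <target>": true when any other entity is within radius_m."""
    return any(e is not target and distance(e, target) <= radius_m for e in entities)

def on_wizard_update(entities, target, current_page, links):
    """Re-evaluate scenario-triggered links after the wizard drags an entity.

    links maps (page, scenario name) -> next page.
    """
    if anyone_moves_near(entities, target):
        return links.get((current_page, "anyone moves near Bob"), current_page)
    return current_page

# Example: dragging Alice next to Bob switches the mockup to another page.
bob, alice = Entity("Bob", 0.0, 0.0), Entity("Alice", 50.0, 0.0)
links = {("map page", "anyone moves near Bob"): "greeting page"}
alice.x = 4.0  # the wizard drags Alice toward Bob
print(on_wizard_update([alice, bob], bob, "map page", links))  # -> greeting page
```

The point of the sketch is the division of labor during a test: the wizard only drags entities, and the prototype's own interaction logic decides when a page transition fires.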

Designing visual languages for WOz testing

Understanding how to design more effective WOz testing interfaces is one of our major research goals. In our view, to help wizards better run user tests and better simulate ubicomp applications, we need more appropriate visual languages. Traditionally, researchers address visual language issues only for creating a design. For ubicomp applications, however, designing visual languages for testing is just as important.

Figure 3. Workload distribution as designs progress. As a design matures, a wizard does less work, while a designer does more.

In the iterative design process, wizards and designers play two different roles, which either the same person or different people can perform. Ideally, WOz testing requires a wizard to update the prototype's state on the basis of the world state. A visual language for testing should help a wizard observe dynamic changes in a test environment and easily specify these changes to update the prototype's state. In location-enhanced applications, for example, dynamic changes refer to the movement of users and physical objects.

In Topiary, the prototype's state includes the locations of entities stored in the prototype's repository (see the wizard map in figure 2) and the current storyboard page (see the storyboard analysis window in figure 2). Most of the time, a wizard updates only the location of entities, and Topiary automatically updates and maintains other parts of the prototype's state (for example, the current page) on the basis of the storyboard's interaction logic. To meet the goal of having wizards update prototypes on the basis of the real-world state, we propose two basic design principles. A visual language should

• visualize the prototype's current state and make it easy for a wizard to efficiently perceive the difference between that state and the real-world state, and
• provide an interaction vocabulary that lets the wizard easily update the prototype state on the basis of his or her direct observations.

Topiary's wizard map, for example, provides a simple overview of entity locations maintained by a prototype, which lets the wizard easily see where updates are needed. Wizards need only capture two things about entities: their position and orientation. Both are based on the wizard's direct observation, and he or she can change them through direct manipulation.
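
As a rough illustration of this principle (hypothetical names, not Topiary's code), the wizard-side handlers below capture exactly the two observable properties, position and heading, and derive the end-user map rotation described in the figure 2 caption from the heading.

```python
from dataclasses import dataclass

# Hypothetical sketch of the wizard-side state update; not Topiary's actual code.

@dataclass
class EntityState:
    x: float        # map position, in meters
    y: float
    heading: float  # degrees clockwise from map "up"

def on_drag(entity: EntityState, new_x: float, new_y: float) -> None:
    """Direct manipulation: the wizard drags the entity to where the user actually is."""
    entity.x, entity.y = new_x, new_y

def on_rotate(entity: EntityState, new_heading: float) -> None:
    """Direct manipulation: the wizard rotates the arrow to match the user's facing direction."""
    entity.heading = new_heading % 360.0

def end_user_map_rotation(entity: EntityState) -> float:
    """Counter-rotate the end-user map so it stays aligned with the user's heading
    (the behavior described in the figure 2 caption)."""
    return -entity.heading

bob = EntityState(x=0.0, y=0.0, heading=0.0)
on_rotate(bob, 90.0)               # the wizard observes Bob turning to face east
print(end_user_map_rotation(bob))  # -> -90.0: the map rotates so Bob's view stays "up"
```
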
When we evaluated Topiary with seven researchers and interface designers, they rated the tool's wizard interface highest among all features.9 Participants particularly appreciated how straightforward the wizard interface is. We also conducted six field experiments with four people to test various designs of a tour-guide application. While acting as wizards, we found Topiary's interface was effective at capturing users' movement while walking. Our participants confirmed Topiary's WOz testing effectiveness: they didn't realize their locations were being updated by a wizard, rather than by real sensors.

Allocating tasks

In designing Topiary's WOz interfaces, we found that there was often a trade-off in allocating tasks between designers and wizards. On one hand, designers can quickly create rough designs with few details, placing the burden on the wizard to simulate more sophisticated behaviors. On the other hand, designers can shoulder more of this burden by initially specifying more detail and functionality up front so that it can be automatically executed at test time, making the wizard's job much easier.

For example, to show users the shortest path in Topiary, a designer can, at design time, draw a road network on a map (see the bold brown lines on the wizard map in figure 2) and specify a starting point and destination (such as Bob's current location and the parking lot, respectively). Then, at test time, Topiary automatically constructs a shortest path on the basis of Bob's current location (see the bold pink line on figure 2's end-user screen). Alternatively, a designer can leave this feature unspecified and have the wizard draw lines representing the shortest paths on the end user's screen during the tests. Clearly, the former requires more work from the designer, while the latter requires more from the wizard. The latter also lets a designer quickly try out rough ideas.
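
For the design-time option, the article doesn't say how Topiary represents the road network; assuming it can be treated as a weighted graph of map points, a sketch of the test-time path computation could be an ordinary Dijkstra search from the node nearest Bob's current location. The node names and distances below are hypothetical.

```python
import heapq

# Sketch under an assumed representation: the designer-drawn road network as a
# weighted graph of named map points. Topiary's internal representation isn't described.

def shortest_path(graph, start, goal):
    """Dijkstra search over {node: [(neighbor, distance_m), ...]}."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical road network; "bob" stands in for the node nearest Bob's current location.
roads = {
    "bob":      [("corner_a", 120.0)],
    "corner_a": [("corner_b", 80.0), ("parking_lot", 300.0)],
    "corner_b": [("parking_lot", 150.0)],
}
print(shortest_path(roads, "bob", "parking_lot"))
# -> (350.0, ['bob', 'corner_a', 'corner_b', 'parking_lot'])
```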

As a design evolves, designers need to solidify their design ideas into a design that is validated by user tests (see figure 3). Given this, a WOz tool should let designers choose appropriate task allocations for wizards and designers as a design evolves, to ensure a smooth iterative design process. For example, a tool can offer designers multiple options for designing the same feature. Such options might vary in terms of how much work they require from designers and wizards, and how large a design/testing scale they can handle. For example, while having a wizard manually draw paths requires little work from a designer, it's very inefficient for a wizard to do in a complex testing environment. So, tools should let designers choose an appropriate approach based on whether they prefer quick iteration, larger-scale design, or something in between. This creates another design challenge for tool builders: how to present this redundant support to tool users so that they'll know which approach to choose for a particular design at a particular design stage.

Automating wizard tasks

It can be challenging for wizards to keep track of dynamically changing environments while testing ubicomp applications. We've therefore been designing and experimenting with various techniques to automate wizard tasks. Here, we discuss two such techniques. Although both are specific to location-enhanced applications, the techniques address two common problems of WOz testing of ubicomp applications: simulating sensing inaccuracy to better approximate realistic testing, and relieving wizards from routine tasks.

Simulating sensing inaccuracy

Sensing inaccuracy is an important issue in designing ubicomp applications. For location-enhanced applications, location acquisition depends heavily on sensed data (such as wireless signals, GPS, or accelerometer readings) and inferences (Place Lab uses temporal probabilistic reasoning, for example5). Sensed data is often noisy and sometimes even abnormal owing to environmental variations or sensor hardware failure. Inferencing technologies are also imperfect. Consequently, location context is inherently ambiguous; a reported location isn't necessarily the user's real location, for example.

Simulating sensing inaccuracy can help designers uncover related usability issues in the early stages of design. In our previous Topiary version, a wizard could manually simulate sensing errors by dragging entities to a random map position. However, to lower wizards' cognitive load and systematically examine how sensing inaccuracy influences usability, we wanted to offer explicit support for modeling sensing inaccuracy.

Suede, a tool for designing speech-based UIs, lets designers specify speech recognition accuracy and automatically generate random errors during a test.7 Inspired by Suede, our new version of Topiary lets designers model location-sensing inaccuracy by specifying a target location-sensing infrastructure's stability and accuracy (see figure 4a). On the basis of such specifications, Topiary automatically generates location-sensing errors at test time using a simple probabilistic model. Generally speaking, it's hard to attribute a location-sensing error purely to sensor hardware or inference algorithms. So, we don't make distinctions in Topiary as to the ambiguity's source.

Figure 4. Specifying sensing errors. (a) The Specify Sensing Errors dialog box. Topiary uses the sensing accuracy (24 meters) as the standard deviation for generating Gaussian noise. (b) The end-user screen. The translucent circle represents the region of Bob's possible locations. The circle's size dynamically changes according to the distance between the wizard's selected location and the location generated by the sensing-error model.

As figure 4a shows, a designer can specify sensing inaccuracy using the Specify Sensing Errors dialog box. By checking the "Apply sensing errors" option, a designer can add noise to the wizard's simulation during a test. For example, when a wizard drags Bob to a map position, Topiary would automatically move Bob to another random position, such as five meters away, based on the generated noise.

Before testing, designers must specify how often a sensing infrastructure might work under normal conditions. This implies the sensing infrastructure's stability. In an abnormal condition, a sensing infrastructure gives completely unreasonable location reports. A designer can specify a percentage by either typing it in the text field or dragging the slider. During a test, the system uses this percentage to sample whether the simulated sensing infrastructure is in a normal condition. When it's not, the system randomly generates a map location based on a uniform probabilistic distribution, without considering the entity's location as set by the wizard. Otherwise, Topiary generates a random location around the wizard's location by applying Gaussian noise. Topiary automatically centers a translucent circle around a location generated by the sensing-error model on the end user's screen; this circle represents the entity's region of possible location (see figure 4b). The smaller the circle, the more accurate the location sensing.

Currently, we use a simple model for simulating sensing errors. Given the diversity of available sensing technologies as well as the physical world's complexity, our approach can't model all errors. However, we felt it was more important to make the model easy for nontechnologists to use and understand. Our approach also lets wizards simulate other location errors that might occur in a target situation by manually dragging an entity to a random position.
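
A small sketch of an error generator along these lines is shown below; the function signature, the default values, and the map bounds are our own assumptions, but the structure follows the description above: sample normal versus abnormal conditions from the stability percentage, then apply uniform or Gaussian noise accordingly.

```python
import random

# Sketch of the kind of error model described above; the names, defaults, and map
# bounds are assumptions, not Topiary's implementation.

def simulate_sensed_location(true_x, true_y, accuracy_m=24.0, stability=0.95,
                             map_width=500.0, map_height=500.0):
    """Return a simulated sensor reading for an entity the wizard placed at (true_x, true_y).

    stability:  fraction of readings produced under normal conditions.
    accuracy_m: standard deviation of the Gaussian noise under normal conditions.
    """
    if random.random() > stability:
        # Abnormal condition: a completely unreasonable report, uniform over the map,
        # ignoring the wizard-specified location.
        return random.uniform(0.0, map_width), random.uniform(0.0, map_height)
    # Normal condition: Gaussian noise around the wizard-specified location.
    return random.gauss(true_x, accuracy_m), random.gauss(true_y, accuracy_m)

def ambiguity_radius(true_x, true_y, sensed_x, sensed_y):
    """Radius of the translucent circle on the end-user screen: the distance between the
    wizard's location and the generated one (a smaller circle means more accurate sensing)."""
    return ((sensed_x - true_x) ** 2 + (sensed_y - true_y) ** 2) ** 0.5

x, y = simulate_sensed_location(100.0, 200.0)
print(x, y, ambiguity_radius(100.0, 200.0, x, y))
```

Keeping the model down to two designer-visible parameters, stability and accuracy, is what makes it usable by nontechnologists, at the cost of not capturing technology-specific error behavior.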

Figure 5. Topiary's cruise control. The tool can automatically update an entity's location on the basis of its current velocity. Wizards can deactivate the function by manually dragging the target entity. (The "gray star men," shown here to illustrate Bob's path, aren't visible in the actual interface.)

Wizards' cruise control

In our field study, we often observed that participants moved straight ahead at a constant speed, such as when they knew their destination and were walking down a street. To relieve wizards from routine updating tasks in such situations, we designed wizards' cruise control.

Topiary automatically infers an entity's velocity on the basis of its trajectory as produced by the wizard. Given this information, Topiary can automatically help a wizard update an entity's location. We designed this feature on the basis of the metaphor of automobile cruise control, which lets drivers maintain a constant speed without stepping on the accelerator or brake. In Topiary, a red arrow indicates an entity's velocity during a test (see figure 5). The arrow's length indicates the entity's speed. A wizard might determine that an end user is moving straight ahead at a constant speed on the basis of observation of an entity's motion patterns and the testing environment's geographical attributes. In such cases, the wizard can turn on the entity's cruise control option, and Topiary will automatically move it at the current speed. If the wizard thinks the automatic update is inconsistent with the end user's current location, he or she can manually drag the entity to the correct map position. This mediation automatically turns off the automatic update, similar to how automobile drivers can deactivate cruise control by stepping on the accelerator or brake.
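
A simplified sketch of this behavior follows; how Topiary actually smooths the trajectory and how often it updates aren't described, so the velocity estimate and update loop here are illustrative only.

```python
from dataclasses import dataclass

# Simplified sketch of wizards' cruise control; Topiary's actual velocity inference
# and update rate are not described in the article.

@dataclass
class TrackedEntity:
    x: float
    y: float
    vx: float = 0.0   # inferred velocity, map units per second
    vy: float = 0.0
    cruise: bool = False

def infer_velocity(entity, prev_x, prev_y, dt):
    """Estimate velocity from the wizard's last two drag positions."""
    entity.vx = (entity.x - prev_x) / dt
    entity.vy = (entity.y - prev_y) / dt

def tick(entity, dt):
    """Called periodically: with cruise control on, advance the entity at its current velocity."""
    if entity.cruise:
        entity.x += entity.vx * dt
        entity.y += entity.vy * dt

def wizard_drag(entity, x, y):
    """A manual drag both corrects the location and deactivates cruise control."""
    entity.x, entity.y = x, y
    entity.cruise = False

bob = TrackedEntity(x=0.0, y=0.0)
prev = (bob.x, bob.y)
bob.x, bob.y = 1.4, 0.0            # the wizard drags Bob 1.4 m east over one second
infer_velocity(bob, *prev, dt=1.0)
bob.cruise = True                  # the wizard turns on cruise control
tick(bob, dt=5.0)                  # five seconds later, Bob has been moved 7 m farther east
print(bob.x, bob.cruise)           # -> 8.4 True
```
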
We're developing a new suite of tools for WOz testing. This work involves several ideas that researchers can extend to WOz testing of general ubicomp applications.

First, we'd like Topiary to support impromptu design during test sessions. Designers can use a WOz approach to rapidly explore various designs. In the early stages of design, ideas are often vague, making it hard to cover all system aspects, while designers often have design inspirations while running a test. Given this, a WOz testing interface should let designers try out different output options and even create new feedback in response to unexpected discoveries made during a test. Building such a feature entails two challenges. First, how much flexibility should a wizard have? A WOz test shouldn't overload the wizard, and achieving flexibility should require little effort. Second, how can we capture and solidify impromptu designs so that they're not ephemeral?

Second, combining WOz testing with sensor-based testing will let users test their designs in more realistic situations. It will also enable longitudinal, large-scale design testing, which is useful for getting feedback and experience from users' daily lives rather than from controlled experiments. As a design matures, the need for wizards to mediate a test decreases, and sensor-based testing can play a larger role. Between the two extremes of complete WOz testing and complete sensor-based testing, the two approaches can work together. A tool should thus let designers transition smoothly from WOz testing to sensor-based testing as a design evolves.

Third, implementing multiwizard testing in Topiary would let users scale up testing when a target setting involves multiple entities and complex activities. For example, each wizard could use a separate device to observe and track a different entity's movement. However, the cost of multiwizard testing in a realistic situation is often high in the early stages of design. Consequently, it would
