The State of the Art in Automating Usability Evaluation of User Interfaces

MELODY Y. IVORY AND MARTI A. HEARST
University of California, Berkeley

Usability evaluation is an increasingly important part of the user interface design process. However, usability evaluation can be expensive in terms of time and human resources, and automation is therefore a promising way to augment existing approaches. This article presents an extensive survey of usability evaluation methods, organized according to a new taxonomy that emphasizes the role of automation. The survey analyzes existing techniques, identifies which aspects of usability evaluation automation are likely to be of use in future research, and suggests new ways to expand existing approaches to better support usability evaluation.

Categories and Subject Descriptors: H.1.2 [Information Systems]: User/Machine Systems—human factors; human information processing; H.5.2 [Information Systems]: User Interfaces—benchmarking; evaluation/methodology; graphical user interfaces (GUI)

General Terms: Human Factors

Additional Key Words and Phrases: Graphical user interfaces, taxonomy, usability evaluation automation, web interfaces

This research was sponsored in part by the Lucent Technologies Cooperative Research Fellowship Program, a GAANN fellowship, and Kaiser Permanente.

Authors’ addresses: M. Y. Ivory, Computer Science Division, University of California, Berkeley, Berkeley, CA 94720-1776; email: ivory@CS.Berkeley.edu; M. A. Hearst, School of Information Management and Systems, University of California, Berkeley, Berkeley, CA 94720-4600; email: hearst@SIMS.Berkeley.edu.

Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication, and its date appear, and notice is given that copying is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee.

© 2001 ACM 0360-0300/01/1200-0470 $5.00

ACM Computing Surveys, Vol. 33, No. 4, December 2001, pp. 470–516.

1. INTRODUCTION

Usability is the extent to which a computer system enables users, in a given context of use, to achieve specified goals effectively and efficiently while promoting feelings of satisfaction (adapted from ISO 9241 [International Standards Organization 1999]). Usability evaluation (UE) consists of methodologies for measuring the usability aspects of a system’s user interface (UI) and identifying specific problems [Dix et al. 1998; Nielsen 1993]. Usability evaluation is an important part of the overall user interface design process, which consists of iterative cycles of designing, prototyping, and evaluating [Dix et al. 1998; Nielsen 1993]. Usability evaluation is itself a process that entails many activities, depending on the method employed. Common activities include:

—Capture: collecting usability data, such as task completion time, errors, guideline violations, and subjective ratings;
—Analysis: interpreting usability data to identify usability problems in the interface; and
—Critique: suggesting solutions or improvements to mitigate problems.
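To make the three activities concrete, the following sketch shows how captured usability data might flow through a simple analysis and critique step. It is purely illustrative; the record fields, thresholds, and suggestion text are our own assumptions and do not come from any tool surveyed in this article.

```python
from dataclasses import dataclass

@dataclass
class SessionRecord:
    """Captured usability data for one participant and one task (illustrative fields)."""
    task: str
    completion_time_s: float
    errors: int
    satisfaction: int  # subjective rating, e.g., 1 (low) to 5 (high)

def analyze(records, time_limit_s=120.0, max_errors=2):
    """Analysis: flag records whose measures suggest a usability problem."""
    return [r for r in records
            if r.completion_time_s > time_limit_s or r.errors > max_errors]

def critique(problem_records):
    """Critique: pair each flagged task with a (canned) improvement suggestion."""
    return [(r.task, "review the task flow and error messages for this task")
            for r in problem_records]

# Capture would normally be automated logging; here the data are hard-coded samples.
captured = [SessionRecord("save file", 45.0, 0, 5),
            SessionRecord("change preferences", 210.0, 4, 2)]
for task, suggestion in critique(analyze(captured)):
    print(f"{task}: {suggestion}")
```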

A wide range of usability evaluation techniques have been proposed, and a subset of these is currently in common use. Some evaluation techniques, such as formal user testing, can only be applied after the interface design or prototype has been implemented. Others, such as heuristic evaluation, can be applied in the early stages of design. Each technique has its own requirements, and generally different techniques uncover different usability problems.

Usability findings can vary widely when different evaluators study the same user interface, even if they use the same evaluation technique [Jeffries et al. 1991; Molich et al. 1998, 1999; Nielsen 1993]. Two studies in particular, the first and second comparative user testing studies (CUE-1 [Molich et al. 1998] and CUE-2 [Molich et al. 1999]), demonstrated less than a 1% overlap in findings among four and eight independent usability testing teams for evaluations of two user interfaces. This result implies a lack of systematicity or predictability in the findings of usability evaluations. Furthermore, usability evaluation typically covers only a subset of the possible actions users might take. For these reasons, usability experts often recommend using several different evaluation techniques [Dix et al. 1998; Nielsen 1993].

How can systematicity of results and fuller coverage in usability assessment be achieved? One solution is to increase the number of usability teams evaluating the system and to increase the number of study participants. An alternative is to automate some aspects of usability evaluation, such as the capture, analysis, or critique activities.

Automation of usability evaluation has several potential advantages over nonautomated evaluation, such as the following.

—Reducing the cost of usability evaluation. Methods that automate capture, analysis, or critique activities can decrease the time spent on usability evaluation and consequently the cost. For example, software tools that automatically log events during usability testing eliminate the need for manual logging, which can typically take up a substantial portion of evaluation time.
—Increasing consistency of the errors uncovered. In some cases it is possible to develop models of task completion within an interface, and software tools can consistently detect deviations from these models. It is also possible to detect usage patterns that suggest possible errors, such as immediate task cancellation (a small detection sketch follows this list).
—Predicting time and error costs across an entire design. As previously discussed, it is not always possible to assess every single aspect of an interface using nonautomated evaluation. Software tools, such as analytical models, make it possible to widen the coverage of evaluated features.
—Reducing the need for evaluation expertise among individual evaluators. Automating some aspects of evaluation, such as the analysis or critique activities, could aid designers who do not have expertise in those aspects of evaluation.
—Increasing the coverage of evaluated features. Due to time, cost, and resource constraints, it is not always possible to assess every single aspect of an interface. Software tools that generate plausible usage traces make it possible to evaluate aspects of interfaces that may not otherwise be assessed.
—Enabling comparisons between alternative designs. Because of time, cost, and resource constraints, usability evaluations typically assess only one design or a small subset of features from multiple designs. Some automated analysis approaches, such as analytical modeling and simulation, enable designers to compare predicted performance for alternative designs.
—Incorporating evaluation within the design phase of UI development, as opposed to being applied after implementation. This is important because evaluation with most nonautomated methods can typically be done only after the interface or prototype has been built, when changes are more costly [Nielsen 1993]. Modeling and simulation tools make it possible to explore UI designs earlier.
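A detector for the usage pattern mentioned above (immediate task cancellation) can be sketched in a few lines. The event names, log format, and five-second threshold below are hypothetical choices made for the example, not part of any tool discussed in this survey.

```python
def immediate_cancellations(events, threshold_s=5.0):
    """Scan a time-ordered event log for tasks cancelled almost immediately
    after being started, a pattern that may signal a usability problem.

    `events` is a list of (timestamp_seconds, event_name, task_id) tuples.
    """
    started = {}    # task_id -> start timestamp
    suspects = []
    for t, name, task in events:
        if name == "task_start":
            started[task] = t
        elif name == "task_cancel" and task in started:
            if t - started.pop(task) <= threshold_s:
                suspects.append(task)
    return suspects

log = [(0.0, "task_start", "print"), (2.1, "task_cancel", "print"),
       (10.0, "task_start", "save"), (55.0, "task_complete", "save")]
print(immediate_cancellations(log))  # ['print']
```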

It is important to note that we consider automation to be a useful complement and addition to standard evaluation techniques such as heuristic evaluation and usability testing—not a substitute. Different techniques uncover different kinds of problems, and subjective measures such as user satisfaction are unlikely to be predictable by automated methods.

Despite the potential advantages, the space of usability evaluation automation is quite underexplored. In this article, we discuss the state of the art in usability evaluation automation, and highlight the approaches that merit further investigation. Section 2 presents a taxonomy for classifying UE automation, and Section 3 summarizes the application of this taxonomy to 132 usability methods. Sections 4 through 8 describe these methods in more detail, including our summative assessments of automation techniques. The results of this survey suggest promising ways to expand existing approaches to better support usability evaluation.

2. TAXONOMY OF USABILITY EVALUATION AUTOMATION

In this discussion, we make a distinction between WIMP (windows, icons, pointer, and mouse) interfaces and Web interfaces, in part because the nature of these interfaces differs and in part because the usability methods discussed have often only been applied to one type or the other in the literature. WIMP interfaces tend to be more functionally oriented than Web interfaces. In WIMP interfaces, users complete tasks, such as opening or saving a file, by following specific sequences of operations. Although there are some functional Web applications, most Web interfaces offer limited functionality (i.e., selecting links or completing forms), but the primary role of many Web sites is to provide information. Of course, the two types of interfaces share many characteristics; we highlight their differences when relevant to usability evaluation.

Several surveys of UE methods for WIMP interfaces exist; Hom [1998] and Human Factors Engineering [1999] provide a detailed discussion of inspection, inquiry, and testing methods (these terms are defined below). Several taxonomies of UE methods have also been proposed. The most commonly used taxonomy is one that distinguishes between predictive (e.g., GOMS analysis and cognitive walkthrough, also defined below) and experimental (e.g., usability testing) techniques [Coutaz 1995]. Whitefield et al. [1991] present another classification scheme based on the presence or absence of a user and a computer. Neither of these taxonomies reflects the automated aspects of UE methods.

The sole existing survey of usability evaluation automation, by Balbo [1995], uses a taxonomy that distinguishes among four approaches to automation:

—Nonautomatic: methods “performed by human factors specialists”;
—Automatic Capture: methods that “rely on software facilities to record relevant information about the user and the system, such as visual data, speech acts, keyboard and mouse actions”;
—Automatic Analysis: methods that are “able to identify usability problems automatically”; and
—Automatic Critic: methods that “not only point out difficulties but propose improvements.”

Balbo uses these categories to classify 13 common and uncommon UE methods. However, most of the methods surveyed require extensive human effort, because they rely on formal usability testing and/or require extensive evaluator interaction. For example, Balbo classifies several techniques for processing log files as automatic analysis methods despite the fact that these approaches require formal testing or informal use to generate those log files. What Balbo calls an automatic critic method may require the evaluator to create a complex UI model as input. Thus this classification scheme is somewhat misleading, since it ignores the nonautomated requirements of the UE methods.

2.1. Proposed Taxonomy

To facilitate our discussion of the state of automation in usability evaluation, we have grouped UE methods along the following four dimensions.

—Method Class: describes the type of evaluation conducted at a high level (e.g., usability testing or simulation);
—Method Type: describes how the evaluation is conducted within a method class, such as thinking-aloud protocol (usability testing class) or information processor modeling (simulation class);
—Automation Type: describes the evaluation aspect that is automated (e.g., capture, analysis, or critique); and
—Effort Level: describes the type of effort required to execute the method (e.g., model development or interface usage).

2.1.1. Method Class. We classify UE methods into five method classes as follows.

—Testing: an evaluator observes users interacting with an interface (i.e., completing tasks) to determine usability problems.
—Inspection: an evaluator uses a set of criteria or heuristics to identify potential usability problems in an interface.
—Inquiry: users provide feedback on an interface via interviews, surveys, and the like.
—Analytical Modeling: an evaluator employs user and interface models to generate usability predictions.
—Simulation: an evaluator employs user and interface models to mimic a user interacting with an interface and report the results of this interaction (e.g., simulated activities, errors, and other quantitative measures).

UE methods in the testing, inspection, and inquiry classes are appropriate for formative (i.e., identifying specific usability problems) and summative (i.e., obtaining general assessments of usability) purposes. Analytical modeling and simulation are engineering approaches to UE that enable evaluators to predict usability with user and interface models. Software engineering practices have had a major influence on the first three classes, whereas the latter two, analytical modeling and simulation, are quite similar to performance evaluation techniques used to analyze the performance of computer systems [Ivory 2001; Jain 1991].

2.1.2. Method Type. There is a wide range of evaluation methods within the testing, inspection, inquiry, analytical modeling, and simulation classes. Rather than discuss each method individually, we group related methods into method types; this type typically describes how evaluation is performed. We present method types in Sections 4 through 8.

2.1.3. Automation Type. We adapted Balbo’s automation taxonomy (described above) to specify which aspect of a usability evaluation method is automated.

—None: no level of automation supported (i.e., evaluator performs all aspects of the evaluation method);
—Capture: software automatically records usability data (e.g., logging interface usage);
—Analysis: software automatically identifies potential usability problems; and
—Critique: software automates analysis and suggests improvements.

2.1.4. Effort Level. We also expanded Balbo’s automation taxonomy to include consideration of a method’s nonautomated requirements. We augment each UE method with an attribute called effort level; this indicates the human effort required for method execution.

—Minimal Effort: does not require interface usage or modeling.
—Model Development: requires the evaluator to develop a UI model and/or a user model in order to employ the method.
—Informal Use: requires completion of freely chosen tasks (i.e., unconstrained use by a user or evaluator).
—Formal Use: requires completion of specially selected tasks (i.e., constrained use by a user or evaluator).

These levels are not necessarily ordered by the amount of effort required, since this depends on the method employed.
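Read together, the four dimensions form a small classification scheme. The sketch below encodes them as Python enumerations and classifies two methods that appear later in the survey; the class and attribute names are our own, and the two classifications shown are one plausible reading of the survey tables rather than notation taken from the article.

```python
from dataclasses import dataclass
from enum import Enum

class MethodClass(Enum):
    TESTING = "testing"
    INSPECTION = "inspection"
    INQUIRY = "inquiry"
    ANALYTICAL_MODELING = "analytical modeling"
    SIMULATION = "simulation"

class AutomationType(Enum):
    NONE = "none"
    CAPTURE = "capture"
    ANALYSIS = "analysis"
    CRITIQUE = "critique"

class EffortLevel(Enum):
    MINIMAL = "minimal effort"
    MODEL = "model development"
    INFORMAL = "informal use"
    FORMAL = "formal use"

@dataclass
class UEMethod:
    method_type: str           # e.g., "guideline review"
    method_class: MethodClass
    automation: AutomationType
    effort: EffortLevel

examples = [
    # Guideline review: inspection-class critique that needs no interface use.
    UEMethod("guideline review", MethodClass.INSPECTION,
             AutomationType.CRITIQUE, EffortLevel.MINIMAL),
    # Log file analysis: testing-class analysis; the logs come from formal (or informal) use.
    UEMethod("log file analysis", MethodClass.TESTING,
             AutomationType.ANALYSIS, EffortLevel.FORMAL),
]
for m in examples:
    print(f"{m.method_type}: {m.method_class.value} / "
          f"{m.automation.value} / {m.effort.value}")
```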

Fig. 1. Summary of our taxonomy for classifying usability evaluation methods. The right side of the figure demonstrates the taxonomy with two evaluation methods discussed in later sections.

2.1.5. Summary. Figure 1 provides a synopsis of our taxonomy and demonstrates it with two evaluation methods. The taxonomy consists of: a method class (testing, inspection, inquiry, analytical modeling, and simulation); a method type (e.g., log file analysis, guideline review, and surveys); an automation type (none, capture, analysis, and critique); and an effort level (minimal, model, informal, and formal). In the remainder of this article, we use this taxonomy to analyze evaluation methods.

3. OVERVIEW OF USABILITY EVALUATION METHODS

We surveyed 75 UE methods applied to WIMP interfaces, and 57 methods applied to Web UIs. Of these 132 methods, only 29 apply to both Web and WIMP UIs. We determined the applicability of each method based on the types of interfaces a method was used to evaluate in the literature and our judgment of whether the method could be used with other types of interfaces. Table I combines survey results for both types of interfaces, showing method classes and method types within each class in the first column. Each entry in Columns 2 through 5 depicts specific UE methods along with the automation support available and the effort required to employ automation. For some UE methods, we discuss more than one approach; hence, we show the number of methods surveyed in parentheses beside the effort level. Some approaches provide automation support for multiple method types (see Appendix A). Table I contains 110 methods because some methods are applicable to multiple method types; we also only depict methods applicable to both WIMP and Web UIs once. Table II provides descriptions of all method types.

Table I. Automation Support for WIMP and Web UE Methods

[Table I tabulates, for each method class and method type described in Table II, the specific UE methods surveyed under each automation type (none, capture, analysis, and critique) and the effort level each requires. A number in parentheses indicates the number of UE methods surveyed for a particular method type and automation type; the effort level for each method is represented as minimal (blank), formal (F), informal (I), and model (M). Additional marks identify methods for which either formal or informal interface use is required (possibly with a model used in the analysis) and methods that may or may not employ a model.]

Table II. Descriptions of the WIMP and Web UE Method Types Depicted in Table I

Testing
  Thinking-Aloud Protocol        user talks during test
  Question-Asking Protocol       tester asks user questions
  Shadowing Method               expert explains user actions to tester
  Coaching Method                user can ask an expert questions
  Teaching Method                expert user teaches novice user
  Codiscovery Learning           two users collaborate
  Performance Measurement        tester records usage data during test
  Log File Analysis              tester analyzes usage data
  Retrospective Testing          tester reviews videotape with user
  Remote Testing                 tester and user are not colocated during test

Inspection
  Guideline Review               expert checks guideline conformance
  Cognitive Walkthrough          expert simulates user’s problem solving
  Pluralistic Walkthrough        multiple people conduct cognitive walkthrough
  Heuristic Evaluation           expert identifies violations of heuristics
  Perspective-Based Inspection   expert conducts narrowly focused heuristic evaluation
  Feature Inspection             expert evaluates product features
  Formal Usability Inspection    expert conducts formal heuristic evaluation
  Consistency Inspection         expert checks consistency across products
  Standards Inspection           expert checks for standards compliance

Inquiry
  Contextual Inquiry             interviewer questions users in their environment
  Field Observation              interviewer observes system use in user’s environment
  Focus Groups                   multiple users participate in a discussion session
  Interviews                     one user participates in a discussion session
  Surveys                        interviewer asks user specific questions
  Questionnaires                 user provides answers to specific questions
  Self-Reporting Logs            user records UI operations
  Screen Snapshots               user captures UI screens
  User Feedback                  user submits comments

Analytical Modeling
  GOMS Analysis                  predict execution and learning time
  UIDE Analysis                  conduct GOMS analysis within a UIDE
  Cognitive Task Analysis        predict usability problems
  Task-Environment Analysis      assess mapping of user’s goals into UI tasks
  Knowledge Analysis             predict learnability
  Design Analysis                assess design complexity
  Programmable User Models       write program that acts like a user

Simulation
  Information Proc. Modeling     mimic user interaction
  Petri Net Modeling             mimic user interaction from usage data
  Genetic Algorithm Modeling     mimic novice user interaction
  Information Scent Modeling     mimic Web site navigation

There are major differences in automation support among the five method classes. Overall, automation patterns are similar for WIMP and Web interfaces, with the exception that analytical modeling and simulation are far less explored in the Web domain than for WIMP interfaces (2 vs. 16 methods). Appendix A shows the information in Table I separated by UI type.

Table I shows that automation in general is greatly underexplored. Methods without automation support represent 67% of the methods surveyed, and methods with automation support collectively represent only 33%. Of this 33%, capture methods represent 13%, analysis methods represent 18%, and critique methods represent 2%. All but two of the capture methods require some level of interface usage; genetic algorithms and information scent modeling both employ simulation to generate usage data for subsequent analysis. Overall, only 29% of all of the methods surveyed (nonautomated and automated) do not require formal or informal interface use.

To provide the fullest automation support, software would have to critique interfaces without requiring formal or informal use. Our survey found that this level of automation has been developed for only one method type: guideline review (e.g., Farenc and Palanque [1999], Lowgren and Nordqvist [1992], and Scholtz and Laskowski [1998]). Guideline review methods automatically detect and report usability violations and then make suggestions for fixing them (discussed further in Section 5).
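A guideline-review critique can be sketched as a checker that scans an interface description for rule violations and attaches a suggested fix to each one. The sketch below uses Python’s standard HTML parser and two invented guidelines; it is only meant to illustrate the detect-and-suggest pattern, not to describe how the tools cited above work.

```python
from html.parser import HTMLParser

class GuidelineChecker(HTMLParser):
    """Toy guideline-review critic: reports each violation with a suggested fix."""

    def __init__(self):
        super().__init__()
        self.reports = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.reports.append(("image without alternative text",
                                 "add a short description in the alt attribute"))
        if tag == "a" and attrs.get("href") in (None, "", "#"):
            self.reports.append(("link with no meaningful target",
                                 "point the link at a real URL or remove it"))

checker = GuidelineChecker()
checker.feed('<p><img src="logo.gif"><a href="#">click here</a></p>')
for violation, fix in checker.reports:
    print(f"violation: {violation}; suggestion: {fix}")
```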

Of those methods that support the next level of automation—analysis—Table I shows that analytical modeling and simulation methods represent the majority. Most of these methods do not require formal or informal interface use.

The next sections discuss the various UE methods and their automation in more detail. Some methods are applicable to both WIMP and Web interfaces; however, we make distinctions where necessary about a method’s applicability. Our discussion also includes our assessments of automated capture, analysis, and critique techniques using the following criteria:

—Effectiveness: how well a method discovers usability problems;
—Ease of use: how easy a method is to employ;
—Ease of learning: how easy a method is to learn; and
—Applicability: how widely applicable a method is to WIMP and/or Web UIs other than those to which it was originally applied.

We highlight the effectiveness, ease of use, ease of learning, and applicability of automated methods in our discussion of each method class. Ivory [2001] provides a detailed discussion of all nonautomated and automated evaluation methods surveyed.

4. AUTOMATING USABILITY TESTING METHODS

Usability testing with real participants is a fundamental usability evaluation method [Nielsen 1993; Shneiderman 1998]. It provides an evaluator with direct information about how people use computers and what some of the problems are with the interface being tested. During usability testing, participants use the system or a prototype to complete a predetermined set of tasks while the tester records the results of the participants’ work. The tester then uses these results to determine how well the interface supports users’ task completion, as well as other measures such as number of errors and task completion time.

Automation has been used predominantly in two ways within usability testing: automated capture of use data and automated analysis of these data according to some metrics or a model (referred to as log file analysis in Table I). In rare cases methods support both automated capture and analysis of usage data [Al-Qaimari and McRostie 1999; Uehling and Wolf 1995].

4.1. Automating Usability Testing Methods: Capture Support

Many usability testing methods require the recording of the actions a user makes while exercising an interface. This can be done by an evaluator taking notes while the participant uses the system, either live or by repeatedly viewing a videotape of the session; both are time-consuming activities. As an alternative, automated capture techniques can log user activity automatically. An important distinction can be made between information that is easy to record but difficult to interpret (e.g., keystrokes) and information that is meaningful but difficult to automatically label, such as task completion. Automated capture approaches vary with respect to the granularity of information captured.

Within the usability testing class of UE, automated capture of usage data is supported by two method types: performance measurement and remote testing. Both require the instrumentation of a user interface, incorporation into a user interface management system (UIMS), or capture at the system level. A UIMS is a software library that provides high-level abstractions for specifying portable and consistent interface models that are then compiled into UI implementations [Olsen, Jr. 1992].
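The granularity trade-off above can be made concrete with a small logger that timestamps whatever events the instrumentation emits, whether low-level keystrokes or higher-level, task-oriented actions. Every detail of the sketch (the class name, the event vocabulary, and the CSV output) is hypothetical.

```python
import csv
import time

class UsageLogger:
    """Records timestamped UI events during a test session (illustrative only)."""

    def __init__(self):
        self.start = time.monotonic()
        self.events = []  # (seconds since session start, granularity, event name)

    def log(self, name, granularity="keystroke"):
        self.events.append((round(time.monotonic() - self.start, 3), granularity, name))

    def save(self, path):
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows([("t", "granularity", "event"), *self.events])

logger = UsageLogger()
logger.log("key:s")                                      # easy to record, hard to interpret
logger.log("task:save-file:start", granularity="task")   # meaningful, harder to label automatically
logger.save("session01.csv")
```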
Table III provides a synopsis of automated capture methods discussed in the remainder of this section. We discuss support available for WIMP and Web UIs separately.

Table III. Synopsis of Automated Capture Support for Usability Testing Methods

Method Class: Usability Testing    Automation Type: Capture

Method Type: Performance Measurement—record usage data during test (7 methods)
  UE Method                                                   UI          Effort
  Log low-level events (Hammontree et al. [1992])             WIMP        F
  Log UIMS events (UsAGE, IDCAT)                              WIMP        F
  Log system-level events (KALDI)                             WIMP        F
  Log Web server requests (Scholtz and Laskowski [1998])      Web         F
  Log client-side activities (WebVIP, WET)                    Web         F

Method Type: Remote Testing—tester and user are not colocated (3 methods)
  UE Method                                                   UI          Effort
  Employ same-time different-place testing (KALDI)            WIMP, Web   IF
  Employ different-time different-place testing               WIMP, Web   IF
    (journaled sessions)
  Analyze a Web site’s information organization (WebCAT)      Web         IF

Note: The effort level for each method is represented as: minimal (blank), formal (F), informal (I), and model (M).

4.1.1. Automating Usability Testing Methods: Capture Support—WIMP UIs. Performance measurement methods record usage data (e.g., a log of events and times when events occurred) during a usability test. Video recording and event logging tools [Al-Qaimari and McRostie 1999; Hammontree et al. 1992; Uehling and Wolf 1995] are available to automatically and accurately align timing data with user interface events. Some event logging tools (e.g., Hammontree et al. [1992]) record events at the keystroke or system level. Recording data at this level produces voluminous log files and makes it difficult to map recorded usage into high-level tasks.

As an alternative, two systems log events within a UIMS. UsAGE (user action graphing effort) [Uehling and Wolf 1995] enables the evaluator to replay logged events, meaning it can replicate logged events during playback. (This UsAGE tool is not to be confused with the UsAGE analytical modeling approach discussed in Section 7.) Replay requires that the same study data (databases, documents) be available during playback as were used during the usability test. IDCAT (integrated data capture and analysis tool) [Hammontree et al. 1992] logs events and automatically filters and classifies them into meaningful actions. This system requires a video recorder to synchronize taped footage with logged events. KALDI (keyboard/mouse action logger and display instrument) [Al-Qaimari and McRostie 1999] supports event logging and screen capturing via Java and does not require special equipment. Both KALDI and UsAGE also support log file analysis (see Section 4.2).

Remote testing methods enable testing between a tester and participant who are not colocated. In this case the evaluator is not able to observe the participant directly, but can gather data about the process over a computer network. Remote testing methods are distinguished according to whether a tester observes the participant during testing. Same-time different-place and different-time different-place are two major remote testing approaches [Hartson et al. 1996].

In same-time different-place or remote control testing, the tester observes the participant’s screen through network transmissions (e.g., using PC Anywhere or Timbuktu) and may be able to hear what the participant says via a speaker telephone or a microphone affixed to the computer. Software makes it possible for the tester to interact with the participant during the test, which is essential for techniques such as the question-asking or thinking-aloud protocols that require such interaction.

The tester does not observe the participant during different-time different-place testing. An example of this approach is the journaled session [Nielsen 1993], in which software guides the participant through a testing session and logs the results.
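A journaled session can be sketched as a small harness that presents each task, lets the participant work, and appends what happened to a journal file that is later returned to the evaluators. The task list, prompts, and file format below are invented for illustration and are not taken from the journaled-session tools described above.

```python
import json
import time

TASKS = ["Open the sample document", "Change the default font", "Save and exit"]

def journaled_session(journal_path="journal.jsonl"):
    """Guide the participant through the tasks and log the outcomes locally."""
    with open(journal_path, "a") as journal:
        for task in TASKS:
            print(f"Task: {task}")
            start = time.monotonic()
            outcome = input("Type 'done' when finished, or describe any problem: ")
            entry = {"task": task,
                     "seconds": round(time.monotonic() - start, 1),
                     "outcome": outcome}
            journal.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    journaled_session()
```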

Evaluators can use this approach with prototypes to get feedback early in the design process, as well as with released products. In the early stages, evaluators distribute disks containing a prototype of a software product and embedded code for recording users’ actions. Users experiment with the prototype
